PaStiX issues (https://gitlab.inria.fr/solverstack/pastix/-/issues), updated 2017-08-23

----------------------------------------------------------------------
Issue #2: CRS and CCS format (Ghost User, 2017-08-23)
https://gitlab.inria.fr/solverstack/pastix/-/issues/2

Does PaStiX handle compressed row formats? I tried running from the master branch and got a segfault when attempting to use my sparse matrix after passing row and column pointers to PaStiX (from an Eigen::SparseMatrix).
When I searched through the code to understand it, I came across line 358 of `spm.c`, where the matrix is set to CCS by default and the input `layout` argument is ignored.
What does PaStiX do with the row and column pointers that are passed to the `pastix_spm_s` struct? The problem occurs just after calling `pastixFinalize()`.

----------------------------------------------------------------------
Issue #3: Compiler warnings for abs (Ghost User, 2017-08-07)
https://gitlab.inria.fr/solverstack/pastix/-/issues/3

When compiling the tests with gcc-7, warnings are reported:
```
`warning: using integer absolute value function 'abs' when argument is
of floating point type [-Wabsolute-value]
result = abs(norms - normd) / (norms * eps);
note: use function 'fabs' instead
result = abs(norms - normd) / (norms * eps);
^~~
fabs
```

----------------------------------------------------------------------
Issue #4: Compiler error with ifort 17.0.0 on pastixf.f90 (AUMAGE Olivier, 2017-09-01)
https://gitlab.inria.fr/solverstack/pastix/-/issues/4

Hi,
PaStiX rev "master 628bc7b" fails to build, producing the compiler error below on file pastixf.f90. The Fortran compiler used is Intel ifort 17.0.0.
```
Building Fortran object wrappers/fortran90/CMakeFiles/pastixf.dir/src/pastixf.f90.o
/home/cvtoauma/Linalg/pastix.git/wrappers/fortran90/src/pastixf.f90(548): error #8011: A pointer dummy argument with the
INTENT(IN) attribute shall not appear as an actual argument if the associated dummy argument has the INTENT(OUT) or INTENT(INOUT)
attribute. [FILENAME]
call c_f_pointer(filename_aux, filename)
-----------------------------------^
compilation aborted for /home/cvtoauma/Linalg/pastix.git/wrappers/fortran90/src/pastixf.f90 (code 1)
```
The following patch seems to fix the issue (the compiler seems to assume that all the arguments of pointed functions are inout):
```
diff --git a/wrappers/fortran90/src/pastixf.f90 b/wrappers/fortran90/src/pastixf.f90
index ee1fc94..72c6787 100644
--- a/wrappers/fortran90/src/pastixf.f90
+++ b/wrappers/fortran90/src/pastixf.f90
@@ -538,7 +538,7 @@ contains
real(kind=c_double), intent(inout), dimension(dparm_size), target :: dparm
integer(kind=c_int), intent(inout), target :: check
integer(c_int), intent(inout), target :: driver
- character(kind=c_char), intent(in), pointer :: filename
+ character(kind=c_char), intent(inout), pointer :: filename
type(c_ptr) :: argv_aux
type(c_ptr) :: filename_aux
```
Best regards,
--
Olivier

----------------------------------------------------------------------
Issue #5: Symbolic factorization segfault (for MaPHyS integration) (MARAIT Gilles, 2017-11-09)
https://gitlab.inria.fr/solverstack/pastix/-/issues/5

When using MaPHyS + PaStiX 6 on the matrix young1c with 8 processes, we obtain a segmentation fault during the symbolic factorization.
```
Symbolic Factorization :
Symbol factorization using: Fax
*** Error in `./zmph_paddle': realloc(): invalid next size: 0x00000000026222f0 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f1c002957e5]
```
After investigation, it seems that matrices 3 and 6 are affected by the segmentation fault.
Attached files:
- the 8 subdomain matrices. [matrices.tar](/uploads/529d4331b0d2b495f8f486b12f0cc4d3/matrices.tar)
- the 8 domains description. The schurlist corresponds to the "myindexintrf" in the file. [domains.tar](/uploads/ac8cfc06f15209a99689225c57d1cb2f/domains.tar)
- The 8 "ordergen" and "symbgen" files dumped with IPARM_IO_STRATEGY = PastixIOSave. [newpastix_ordergen.tar](/uploads/e6b2f1567a4c005319d52178d7dd95db/newpastix_ordergen.tar)
- The same 8 "ordergen" and "symbgen" files but with the old pastix. [oldpastix_ordergen.tar](/uploads/1440318ed8cad15bb58294ea2b2ddd98/oldpastix_ordergen.tar)
NB: this issue appeared with 4 processes before the last commits.

----------------------------------------------------------------------
Issue #6: Expose pastixInitWithAffinity into the fortran wrapper (KUHN Matthieu, 2017-11-17)
https://gitlab.inria.fr/solverstack/pastix/-/issues/6

It would be useful to be able to bind threads manually when using PaStiX inside MaPHyS, so that PaStiX+MaPHyS = <3.

----------------------------------------------------------------------
Issue #7: Error in compilation (Fortran + intel) (MARAIT Gilles, 2017-12-01)
https://gitlab.inria.fr/solverstack/pastix/-/issues/7

When compiling with spack pastix@solverstack+blasmt on Occigen, I get an error:
```
.../pastix/wrappers/fortran90/examples/fstep-by-step.f90(64): error #6691: A pointer dummy argument may only be argument associated with a pointer.   [SPM]
call pastix_subtask_order( pastix_data, spm, null(), info )
------------------------------------------^
```
I think that for the Intel compiler `spm` should be a `pointer` and not a `target`, even though this is not a problem for the GNU compiler.

----------------------------------------------------------------------
Issue #8: Ordering issue on occigen (KUHN Matthieu, 2017-12-04)
https://gitlab.inria.fr/solverstack/pastix/-/issues/8

There is an issue in the ordering step of PaStiX on the occigen cluster when launching
the simple example on a Laplacian.
PaStiX has been installed with both Scotch 6.0.4 and 5.1.10b, with MKL and the Intel 17.0.0 compiler.
The simple example with a 1D laplacian (e.g. ./simple --lap 100 -t 4) is fine.
However, when considering a 2D laplacian (e.g. ./simple --lap 100:100 -t 4), the run
gets stuck in the ordering step of PaStiX, more precisely inside Scotch. See this ddt screenshot for more details: ![ddt](/uploads/00887cf9877a1d5f3426e9a3b6e090ee/ddt.png)
A similar behavior has been observed when using PaStiX inside MaPHyS on occigen, on MaPHyS' classical examples.

----------------------------------------------------------------------
Issue #9: MaPHyS + sparse pcd + Pastix -> error in order_apply_level_order (MARAIT Gilles, 2018-02-19)
https://gitlab.inria.fr/solverstack/pastix/-/issues/9

When using MaPHyS + PaStiX with sparse preconditioning, we obtain a memory error at order_apply_level_order.c:276.
It seems that the tree has cycles in it. Attached are the 2 spm files of the 2 processes used: [spmfiles.tgz](/uploads/c2f4d1b37702aac7669ec905435fa6c4/spmfiles.tgz)
NB: when setting iparm(IPARM_TASKS2D_LEVEL) = 0 in MaPHyS, we do not enter this part of the code and the error does not occur.

----------------------------------------------------------------------
Issue #10: Distributed matrix format (Ghost User, 2018-07-20)
https://gitlab.inria.fr/solverstack/pastix/-/issues/10

I have a question about the distributed matrix format in PaStiX.
Correct me if I'm wrong, but in the previous version it wasn't permitted to have a column split across more than one process, i.e. to have the same column appear in the `loc2glob` vector of more than one process. From the user side it's definitely easier to assemble the stiffness matrix without much thought for the parallel environment, and to pass in the matrix with local indexing and the `loc2glob` vector during the solution stage. Will this restriction be present in the MPI release of PaStiX 6 too?

----------------------------------------------------------------------
Issue #11: spm nnz overflow (Ghost User, 2018-01-11)
https://gitlab.inria.fr/solverstack/pastix/-/issues/11

The current sparse matrix struct uses a pastix_int_t (potentially 32-bit) for the nnz count. This could easily overflow a 32-bit integer even when the other matrix indices do not. Should this be size_t instead?

----------------------------------------------------------------------
Issue #12: Fortran mangling with icc 17.0 (KUHN Matthieu, 2018-03-08)
https://gitlab.inria.fr/solverstack/pastix/-/issues/12

----------------------------------------------------------------------
Issue #13: Check multi-RHS and add a CI testing (RAMET Pierre, 2018-07-23)
https://gitlab.inria.fr/solverstack/pastix/-/issues/13

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #14: API migration from PaStiX 5 to PaStiX 6 (comprehension check) (Andrea Piacentini, 2018-02-21)
https://gitlab.inria.fr/solverstack/pastix/-/issues/14

While porting my F90 application from PaStiX 5 to PaStiX 6 I'm not completely sure of all the parameter "translations".
In particular,
* is the old
```
iparm(IPARM_SYM) = API_SYM_YES
```
completely replaced by the spm feature
```
spm%mtxtype = PastixSymmetric
```
* is the old
```
iparm(IPARM_MATRIX_VERIFICATION) = API_YES
```
equivalent to a beforehand call to
```
call spmCheckAndCorrect( spm, spm2 )
```
* is the old
```
iparm(IPARM_RHS_MAKING) = API_RHS_B
```
equivalent to a beforehand call to
```
call spmGenRHS(
```
* is there any other important new tunable feature that we did not use to have in PaStiX 5?

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #15: Compilation and link options generation tool (Andrea Piacentini, 2018-03-05)
https://gitlab.inria.fr/solverstack/pastix/-/issues/15

To ease portability of our applications across the different platforms where PaStiX is installed with customized compilation options, we relied on the `pastix-conf` tool with its useful `--fc`, `--fcopts`, etc. options.
Is there any plan to reinstate it?
For the moment, similar information is contained in
```
build/example/make
```
but only for C applications.
Furthermore, `pkg-config` fails if hwloc was preinstalled and is not known in `PKG_CONFIG_PATH`.
In the specific case of intel ifort 16.0.4 (using mkl), some of the options turned on by ctest seem not to be strictly necessary, but I wonder if they could become meaningful in some situation.
They are `-f77rtl` as a compilation option, and the
`-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lz -lm -lrt -lirng -ldecimal -lcilkrts -lstdc++` library links.
Most probably, the latter are installed as standard or default libraries on my test machine (and I am currently unable to remember how to display the full list of standard and default libraries for ifort).

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #16: Check list of the PaStiX 6 implementation of the CERFACS customized features in PaStiX 5 (Andrea Piacentini, 2018-07-23)
https://gitlab.inria.fr/solverstack/pastix/-/issues/16

For the effective integration of a sequential threadsafe version of PaStiX 5, as a routine called from an OpenMP region of a hybrid MPI+OpenMP application, we had to customize both the sources and the compilation options of PaStiX 5.
Just to be more than sure that no customization is needed in PaStiX 6, here is a list of what we had to do:
* *purely sequential version of PaStiX 5*
This was obtained by setting
```
-DFORCE_NOMPI
-DFORCE_NOSMP
```
and removing
```
-DCUDA_SM_VERSION=...
```
at compilation.
Is it now simply enough to set `iparm(IPARM_THREAD_NBR) = 1` and `iparm(IPARM_VERBOSE) = PastixVerboseNot` to avoid any interference or race condition?
* *activation of multiple RHS*
We had to explicitly activate
```
-DMULT_SMX
```
at compilation. I guess this is not necessary anymore (See issue #13).
* *algebra on multiple RHS*
Moreover, working with @faverge on the specific topic, we concluded that using BLAS2 for the operations on the multiple RHS was counterproductive if `nrhs` was actually set to 1. Is the specific case now handled separately?
* *memory management for multiple RHS*
On the same occasion we noticed a great performance improvement if the `STORAGE` mode was activated, which it was NOT by default. How has this aspect been ported to PaStiX 6? Is it a parameterized choice?
* *dependence on the non threadsafe section of Scotch 6.0.4*
A single treatment inside Scotch is not threadsafe. We made it critical with an OpenMP pragma, while in PaStiX 6 it is explicitly handled as atomic. Has this feature been tested in an intensive OpenMP application?
My tests up to 32 threads all passed once, but the bug is not systematic; an extensive validation, including the impact on performance, is therefore required.
By the way, is there any release announcement for a threadsafe Scotch?

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #17: Test cases timing out or failing with intel16 (Andrea Piacentini, 2018-03-05)
https://gitlab.inria.fr/solverstack/pastix/-/issues/17

With a standard installation under intel 16
```
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/pastix_6.0.0 -DSCOTCH_DIR=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/scotch_6.0.4 -DHWLOC_DIR=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/hwloc-1.11.3 -DPASTIX_INT64=OFF
```
the following ctests fail on a timeout
```
example_cg_simple
```
from example/CTestTestfile.cmake, and
```
test_hb_spm_convert_tests
test_hb_spm_norm_tests
test_hb_spm_matvec_tests
test_hb_spm_dof_expand_tests
test_hb_spm_dof_norm_tests
test_hb_spm_dof_matvec_tests
```
from test/CTestTestfile.cmake, and the following fail on a SEGFAULT:
```
The following tests FAILED:
    125 - example_hb_simple (SEGFAULT)
    127 - example_gmres_simple (SEGFAULT)
    128 - example_bicgstab_simple (SEGFAULT)
    366 - test_hb_bcsc_norm_tests (SEGFAULT)
    367 - test_hb_bcsc_matvec_tests (SEGFAULT)
```
Totalview indicates that the timeout is reached on
```
__lll_lock_wait_private, FP=7ffd1c8502d0
_L_lock_49, FP=7ffd1c850350
_IO_fgets, FP=7ffd1c850370
readHB_newmat_double, FP=7ffd1c852590
readHB, FP=7ffd1c8525e0
spmReadDriver, FP=7ffd1c852670
main, FP=7ffd1c852880
__libc_start_main, FP=7ffd1c852940
_start, FP=7ffd1c852948
```

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #18: Need of "in place" format conversions (Andrea Piacentini, 2018-06-04)
https://gitlab.inria.fr/solverstack/pastix/-/issues/18

For the sake of memory economy, we used to convert IJV (a.k.a. COO) matrices to the CSC format by a call to the Sparskit routine
```
SUBROUTINE coocsr_inplace ( n, nnz, job, a, ja, ia, iwk )
```
We wonder whether spmConvert works in place, or generates a second full spm and replaces the INOUT argument before returning.
Notice that coocsr_inplace only needs an integer work array `iwk(n+1)`.

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #19: Factorize multiple sparse matrices stored in multi-dimensional Fortran arrays (Andrea Piacentini, 2018-03-06)
https://gitlab.inria.fr/solverstack/pastix/-/issues/19

The PaStiX 5 Fortran API allowed access to matrices stored as columns of a multidimensional array.
As an example, an application could have to choose a given matrix among a predefined set, according to some run-time condition.
The matrices could be stored with an extra index `self%il_ia(:,:), self%ila_ja(:,:), self%rla_L(:,:)` where the first dimension is the usual storage and the second is the linear system identifier.
In such a case, PaStiX 5 is called for the linear system `il_gsys` by
```
CALL pastix_fortran(self%sla_px(il_gsys)%pastix_data, &
self%sla_px(il_gsys)%pastix_comm, &
self%sla_px(il_gsys)%n, &
self%ila_ia(:,il_gsys),self%ila_ja(:,il_gsys), &
self%rla_L(:,il_gsys), &
self%sla_px(il_gsys)%perm,self%sla_px(il_gsys)%invp, &
self%rla_L(:,il_gsys),self%sla_px(il_gsys)%nrhs, &
self%sla_px(il_gsys)%iparm,self%sla_px(il_gsys)%dparm)
```
It turns out that neither this syntax
```
self%sla_spm%rowptr = c_loc(self%ila_ia(:,il_gsys))
```
nor
```
self%sla_spm%rowptr = c_loc(self%ila_ia(1,il_gsys))
```
leads to correct results.
As a workaround, we plan to rewrite our routines using arrays of derived datatypes
```
type sys_lin
type(pastix_data_t), pointer :: pastix_data
type(pastix_spm_t), pointer :: spm
type(pastix_spm_t), pointer :: spm2
integer(kind=pastix_int_t), dimension(:), pointer :: ila_ia
integer(kind=pastix_int_t), dimension(:), pointer :: ila_ja
complex(kind=c_double_complex), dimension(:), pointer :: rla_L
end type sys_lin
type(sys_lin), dimension(:), allocatable, target :: sla_lap
...
self%sla_lap(ib)%spm%rowptr = c_loc(sla_lap(ib)%ila_ia)
self%sla_lap(ib)%spm%colptr = c_loc(sla_lap(ib)%ila_ja)
self%sla_lap(ib)%spm%values = c_loc(sla_lap(ib)%rla_L)
```

----------------------------------------------------------------------
Issue #20: Reuse of a single factorized matrix for different concurrent solve calls (Andrea Piacentini, 2018-03-06)
https://gitlab.inria.fr/solverstack/pastix/-/issues/20

Next step of experiments, leading to new questions:
*Aim:* Factorize once a single matrix, then use it for different solve calls (each possibly with nrhs>1) distributed among OpenMP threads
Questions:
1. Is the first argument of `pastix_task_solve` (the `pastix_data_t` structure) input only, or is it modified/updated in the call? Otherwise stated, is `pastix_task_solve` threadsafe w.r.t. the pastix data?
2. If the answer to 1. is "yes", we'd need to run the factorization in a single OpenMP thread, but using all the available cores for PaStiX pthreads
```
iparm(IPARM_THREAD_NBR) = il_ompthr
```
while the solve phase should be single-pthreaded and concurrently run on the OpenMP threads.
How can we modify the `iparm(IPARM_THREAD_NBR)` in the pastix structure after initialization?
3. If we need to iterate around the switch between the pthreaded factorization and the single-pthreaded solve, what is the default value of `iparm(IPARM_SCHEDULER)` in pthreaded sections? (We learned to set it to `PastixSchedSequential` in conjunction with `iparm(IPARM_THREAD_NBR) = 1` to switch off pthreading and avoid interference with OpenMP, but we do not know what it has to be set back to.)

----------------------------------------------------------------------
Issue #21: Pb with Python solver interface (Mathieu Faverge, 2018-02-21)
https://gitlab.inria.fr/solverstack/pastix/-/issues/21

The problem reported by @lpoirel is that the following code stops working after a few iterations:
```
import pypastix as pastix
import scipy.sparse as sps
import numpy as np
# Set the problem
for n in range(5, 100):
print(n)
A = sps.spdiags([np.ones(n)*i for i in [4, -1, -1, -1, -1]],
[0, 1, 3, -1, -3], n, n)
x0 = np.ones(n)
b = A.dot(x0)
# Hack to make sure that the mkl is loaded
tmp = np.eye(2).dot(np.ones(2))
# Factorize
solver = pastix.solver(A, verbose=False, thread_nbr=1)
# Solve
x = solver.solve(b, x0=x0, check=True)
solver.finalize()
```
The problem is the corruption of the spm structure that is forwarded to PaStiX.

Assignee: Mathieu Faverge

----------------------------------------------------------------------
Issue #22: Handling of fortran writes and PaStiX generated output (Andrea Piacentini, 2018-06-04)
https://gitlab.inria.fr/solverstack/pastix/-/issues/22

I am pretty sure this is a dummy question for people used to mixed Fortran and C programming.
Yet I am puzzled by the fact that if I introduce in the caller
```
write(6,*) '!------------'
```
statements alternating some PaStiX calls that produce output as
```
call spmPrintInfo
```
or
```
call spmCheckAxb
```
while the output on screen respects the order, if I redirect the output to a file
```
./flaplacian > output_threads8_singlemat_light.out
```
the lines are mangled, as if Fortran and C were concurrently and independently writing their output to the same file.
Any hint on how to recover in the file what I correctly see on screen?
Thank you

----------------------------------------------------------------------
Issue #23: Output of Factorization, Solve time and GFlops (F90 api) (Andrea Piacentini, 2018-03-05)
https://gitlab.inria.fr/solverstack/pastix/-/issues/23

On output of the calls
```
sla_lap(ib)%iparm(IPARM_VERBOSE) = PastixVerboseNot
...
! 1- Initialize the parameters and the solver
call pastixInit( sla_lap(ib)%pastix_data, 0, sla_lap(ib)%iparm, sla_lap(ib)%dparm )
! 2- Analyze the problem
call pastix_task_analyze( sla_lap(ib)%pastix_data, sla_lap(ib)%spm, info )
! 3- Factorize the matrix
call pastix_task_numfact( sla_lap(ib)%pastix_data, sla_lap(ib)%spm, info )
```
The diagnostic prints
```
write(6,*) ' Matrix ', ib
write(6,*) ' Time for analysys ', sla_lap(ib)%dparm(DPARM_ANALYZE_TIME)
write(6,*) ' Pred Time for fact ', sla_lap(ib)%dparm(DPARM_PRED_FACT_TIME)
write(6,*) ' Time for factorization ', sla_lap(ib)%dparm(DPARM_FACT_TIME)
write(6,*) ' GFlops/s for fact ', sla_lap(ib)%dparm(DPARM_FACT_FLOPS)
```
systematically give null factorization times and very optimistic ;-) GFlops/s:
```
Matrix 1
Time for analysys 3.892183303833008E-003
Pred Time for fact 0.115354254012610
Time for factorization 0.000000000000000E+000
GFlops/s for fact 5135859720.59899
```
Notice that, since several factorizations run in parallel on OpenMP threads, the verbosity has to be switched off (set to `PastixVerboseNot`) and all the prints are postponed.
For a test, I switched off the parallelization, set the verbosity to `PastixVerboseNo`, and interspersed the a posteriori writes, obtaining:
```
+-------------------------------------------------+
Analyse step:
Number of non-zeroes in blocked L 2451183
Fill-in 14.324351
Number of operations in full-rank: LL^t 900.43 MFlops
Prediction:
Model AMD 6180 MKL
Time to factorize 1.220890e-01 s
Time for analyze 3.082991e-03 s
Time for analysys 3.082990646362305E-003
Pred Time for fact 0.122088950728251
+-------------------------------------------------+
Factorization step:
Factorization used: LL^t
Time to initialize internal csc 1.364207e-02 s
Time to initialize coeftab 1.336455e-02 s
Time to factorize 1.121373e-01 s ( 9.89 GFlop/s)
Number of operations 1.11 GFlops
Number of static pivots 17
Time for factorization 0.000000000000000E+000
GFlops/s for fact 10622676294.4213
```
Not tested yet with solution times.

----------------------------------------------------------------------
Issue #24: Misaligned output of spmCheckAxb (F90 api) (Andrea Piacentini, 2018-03-05)
https://gitlab.inria.fr/solverstack/pastix/-/issues/24

Since the last merge, the output columns of `spmCheckAxb` called from within the F90 wrapper are mangled.
Here is an example (hoping that GitLab preserves the formatting; the preview does):
```
|| A ||_1 5.350000e+01
max(|| b_i ||_oo) 1.816403e+01
max(|| x_i ||_oo) 5.000000e-01
|| b_0 - A x_0 ||_1 3.644748e-12
|| b_0 - A x_0 ||_1 / (||A||_1 * ||x_0||_oo * eps) 2.770336e+02 (FAILED)
|| b_1 - A x_1 ||_1 3.276452e-12
|| b_1 - A x_1 ||_1 / (||A||_1 * ||x_1||_oo * eps) 2.470496e+02 (FAILED)
max(|| b_i - A x_i ||_1) 3.644748e-12
max(|| b_i - A x_i ||_1 / (||A||_1 * ||x_i||_oo * eps)) 2.770336e+02 (FAILED)
|| x0_0 ||_oo 3.086420e-14
|| x0_0 - x_0 ||_oo / (||x0_0||_oo * eps) 6.172840e+04 (FAILED)
|| x0_1 ||_oo 2.273182e-14
|| x0_1 - x_1 ||_oo / (||x0_1||_oo * eps) 4.559548e+04 (FAILED)
max(|| x0_i ||_oo) 5.000000e-01
max(|| x0_i - x_i ||_oo) 3.086420e-14
max(|| x0_i - x_i ||_oo / || x0_i ||_oo) 6.172840e+04 (FAILED)
```

----------------------------------------------------------------------
Issue #25: Memory leak using spmCheckAndCorrect in Fortran (MARAIT Gilles, 2018-06-04)
https://gitlab.inria.fr/solverstack/pastix/-/issues/25

When calling spmCheckAndCorrect, an spm instance is not freed.
I can see the memory leak using valgrind on the example flaplacian.
https://gitlab.inria.fr/solverstack/pastix/blob/master/wrappers/fortran90/examples/flaplacian.f90#L119
```fortran
call spmCheckAndCorrect( spm, spm2 )
if (.not. c_associated(c_loc(spm), c_loc(spm2))) then
deallocate(rowptr)
deallocate(colptr)
deallocate(values)
spm%rowptr = c_null_ptr
spm%colptr = c_null_ptr
spm%values = c_null_ptr
call spmExit( spm )
spm = spm2
end if
```
```
==16403== 96 bytes in 1 blocks are definitely lost in loss record 189 of 231
==16403== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16403== by 0x612022A: spmCopy (spm.c:762)
==16403== by 0x6120A4D: spmCheckAndCorrect (spm.c:687)
==16403== by 0x516E018: __spmf_MOD_spmcheckandcorrect (spmf.f90:558)
==16403== by 0x401C38: MAIN__ (flaplacian.f90:119)
==16403== by 0x40172C: main (flaplacian.f90:15)
```
I have the same memory leak with MaPHyS.
For some reason it is not the case with fsimple and fstep-by-step, but things are allocated differently so I cannot figure out why the memory leak does not occur there.

----------------------------------------------------------------------
Issue #26: Argument intent mismatch in pastixf (line 644) (Andrea Piacentini, 2018-07-11)
https://gitlab.inria.fr/solverstack/pastix/-/issues/26

The first argument `myorder` of `pastixOrderGrid` in wrappers/fortran90/src/pastixf.f90 (line 644) should have `intent(inout)` instead of `intent(in)`. Intel 16 does not accept an intent(in) argument being passed to `c_f_pointer`.

----------------------------------------------------------------------
Issue #27: Missing spmf in pkg-config for pastixf (Andrea Piacentini, 2018-07-09)
https://gitlab.inria.fr/solverstack/pastix/-/issues/27

`pkg-config --libs pastixf`
answers
`-L/home/pae/daimon/DAIMON_LIB/pastix_6.0.1/lib -lpastixf -lpastix -lpastix_kernels -lpastix -lpastix_kernels -lspm`
missing `-lspmf` (before `-lspm`).

----------------------------------------------------------------------
Issue #28: Use of omp critical for ordering subtasks in fmultilap.f90 (Andrea Piacentini, 2018-07-09)
https://gitlab.inria.fr/solverstack/pastix/-/issues/28

Keep the whole solve phase in a single OpenMP region, protecting the ordering subtasks with an `OMP CRITICAL` construct.
~~Check if the `bindtab` array is of any use in the analyze+numfact phase.~~ Already done.

----------------------------------------------------------------------
Issue #29: Some big worries with spm and recent intel compilers (Andrea Piacentini, 2018-07-23)
https://gitlab.inria.fr/solverstack/pastix/-/issues/29

I am trying to install PaStiX on some of our servers.
On the Météo-France server (our main target) we're bound to **intel 16.0.1.150** with intelmpi 5.1.2.150. In this version of the compiler, mkl does not contain LAPACKE, therefore we **cannot even compile**.
**Lucky case**: my machine, **intel 16.0.4.258** with intelmpi 5.1.3.223. Compilation OK, tests OK (except the _hb_ ones); no problems with the wrapper/f90 tests.
More recent: with **intel 17.0.4.196** or **intel 18.0.1.163**, compilation is OK, but the **flaplacian and both fmultilap tests fail**.
Some debug with intel17 shows that flaplacian passes if we comment out the deallocate at line 178:
```
call spmCheckAxb( dparm(DPARM_EPSILON_REFINEMENT), nrhs, spm, x0_ptr, spm%n, b_ptr, spm%n, x_ptr, spm%n, info )
call spmExit( spm )
!AP INTEL17 deallocate( spm )
deallocate(x0)
deallocate(x)
```
The fmultilap case works well with the test_mt.in input parameters set if we comment out the analogous deallocate at line 557
```
call spmExit( sys_array(im)%spm )
!AP INTEL17 deallocate( sys_array(im)%spm )
end do
```
Yet the problem is deeper when running with the test_seq.in input parameters set (notice that "check and correct" is disabled, therefore it runs through spmConvert):
```
!====================================================================!
Outer iteration: 1
!--------------------------------------------------------------------!
Nb of factorization performed in parallel = 1
Nb of threads used by PaStiX per factorization = 5
*** Error in `./fmultilap': double free or corruption (out): 0x00002b6e1d085400 ***
======= Backtrace: =========
[...]
fmultilap 00000000004FBB49 spmExit 208 spm.c
fmultilap 000000000051BDAF z_spmConvertIJV2C 84 z_spm_convert_to_csc.c
fmultilap 00000000004FC13E spmConvert 391 spm.c
fmultilap 000000000041A279 spmf_mp_spmconver 461 spmf.f90
fmultilap 00000000004159FE fmultilap_IP_mult 859 fmultilap.f90
fmultilap 0000000000410E73 MAIN__ 292 fmultilap.f90
```https://gitlab.inria.fr/solverstack/pastix/-/issues/30Local system missing *Config.cmake2018-07-20T15:29:27+02:00Ghost UserLocal system missing *Config.cmakeI'm trying to compile PaStiX but there are configure errors for CMake. These packages don't provide a CMake config file as part of their installation:
```
-- Building for target x86_64
-- Found target X86_64
CMake Warning at CMakeLists.txt:184 (find_package):
By not providing "FindCBLAS.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "CBLAS", but
CMake did not find one.
Could not find a package configuration file provided by "CBLAS" with any of
the following names:
CBLASConfig.cmake
cblas-config.cmake
Add the installation prefix of "CBLAS" to CMAKE_PREFIX_PATH or set
"CBLAS_DIR" to a directory containing one of the above files. If "CBLAS"
provides a separate development package or SDK, be sure it has been
installed.
CMake Warning at CMakeLists.txt:190 (find_package):
By not providing "FindLAPACKE.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "LAPACKE", but
CMake did not find one.
Could not find a package configuration file provided by "LAPACKE" with any
of the following names:
LAPACKEConfig.cmake
lapacke-config.cmake
Add the installation prefix of "LAPACKE" to CMAKE_PREFIX_PATH or set
"LAPACKE_DIR" to a directory containing one of the above files. If
"LAPACKE" provides a separate development package or SDK, be sure it has
been installed.
CMake Warning at CMakeLists.txt:209 (find_package):
By not providing "FindHWLOC.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "HWLOC", but
CMake did not find one.
Could not find a package configuration file provided by "HWLOC" with any of
the following names:
HWLOCConfig.cmake
hwloc-config.cmake
Add the installation prefix of "HWLOC" to CMAKE_PREFIX_PATH or set
"HWLOC_DIR" to a directory containing one of the above files. If "HWLOC"
provides a separate development package or SDK, be sure it has been
installed.
```
Is there a workaround for this other than contacting the maintainers of these libraries and requesting CMake support?https://gitlab.inria.fr/solverstack/pastix/-/issues/31Compilation errors for Windows (MSYS environment)2019-07-08T12:40:00+02:00RAMET PierreCompilation errors for Windows (MSYS environment)```
$ cmake -G "MSYS Makefiles" -DPASTIX_INT64=OFF ..
-- Building for target AMD64
-- Found target X86_64
-- A cache variable, namely CBLAS_DIR, has been set to specify the install directory of CBLAS
-- In FindBLASEXT
-- If you want to force the use of one specific library,
please specify the BLAS vendor by setting -DBLA_VENDOR=blas_vendor_name
at cmake configure.
-- List of possible BLAS vendor: Goto, ATLAS PhiPACK, CXML,
DXML, SunPerf, SCSL, SGIMATH, IBMESSL, IBMESSLMT, Intel10_32 (intel mkl v10 32 bit),
Intel10_64lp (intel mkl v10 64 bit, lp thread model, lp64 model),
Intel10_64lp_seq (intel mkl v10 64 bit, sequential code, lp64 model),
Intel( older versions of mkl 32 and 64 bit),
ACML, ACML_MP, ACML_GPU, Apple, NAS, Generic
-- A cache variable, namely BLAS_DIR, has been set to specify the install directory of BLAS
-- Looking for BLAS -- mkl.h not found
-- Looking for MKL BLAS: not found
-- Looking for Goto BLAS: not found
-- Looking for Fortran sgemm
-- Looking for Fortran sgemm - found
-- Looking for Open BLAS: found
-- A library with BLAS API found.
-- BLAS_LIBRARIES C:/Octave/octave-4.2.2/lib/libopenblas.dll.a
-- BLAS sequential libraries stored in BLAS_SEQ_LIBRARIES
-- Looking for cblas_dscal
-- Looking for cblas_dscal - found
-- Looking for cblas: test with blas succeeds
-- cblas:
-- A cache variable, namely LAPACKE_DIR, has been set to specify the install directory of LAPACKE
-- In FindLAPACKEXT
-- A cache variable, namely LAPACK_DIR, has been set to specify the install directory of LAPACK
-- Looking for Fortran CHEEV
-- Looking for Fortran CHEEV - found
-- Looking for LAPACK in BLAS: found
-- A library with LAPACK API found.
-- LAPACK_LIBRARIES C:/Octave/octave-4.2.2/lib/libopenblas.dll.a
-- LAPACK sequential libraries stored in LAPACK_SEQ_LIBRARIES
-- Looking for LAPACKE_dgeqrf
-- Looking for LAPACKE_dgeqrf - found
-- Looking for lapacke: test with lapack succeeds
-- lapacke:
-- A cache variable, namely HWLOC_DIR, has been set to specify the install directory of HWLOC
-- Checking for one of the modules 'hwloc'
-- Looking for HWLOC - not found using PkgConfig.
Perhaps you should add the directory containing hwloc.pc to
the PKG_CONFIG_PATH environment variable.
-- Looking for HWLOC - PkgConfig not used
-- Looking for hwloc_topology_init
-- Looking for hwloc_topology_init - found
-- A cache variable, namely SCOTCH_DIR, has been set to specify the install directory of SCOTCH
-- Looking for SCOTCH_graphInit
-- Looking for SCOTCH_graphInit - found
-- Performing Test SCOTCH_Num_4
-- Performing Test SCOTCH_Num_4 - Success
-- Performing Test SCOTCH_Num_8
-- Performing Test SCOTCH_Num_8 - Failed
-- Scotch inlude dirs: C:/Octave/octave-4.2.2/include
-- Checking for one of the modules 'gtg'
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/spm
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/spm - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/spm
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/spm - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/bcsc
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/bcsc - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/bcsc
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/bcsc - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/kernels
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/kernels - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/kernels
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/kernels - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/refinement
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/refinement - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0 - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0 - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0 - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0 - Done
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test - Done
-- A cache variable, namely TMG_DIR, has been set to specify the install directory of TMG
-- Looking for Fortran dlarnv
-- Looking for Fortran dlarnv - found
-- Looking for Fortran dlagsy
-- Looking for Fortran dlagsy - found
-- Looking for tmg: test with lapack succeeds
-- Found TMG: C:/Octave/octave-4.2.2/lib/libopenblas.dll.a
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test - Done
You have called ADD_LIBRARY for library lowrank_test without any source files. This typically indicates a problem with your CMakeLists.txt file
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test
-- Generate precision dependencies in C:/Users/octave-user/install/pastix-6.0.0/test - Done
-- --- Python wrapper is disabled with static libraries
-- Configuring done
CMake Error: Cannot determine link language for target "lowrank_test".
CMake Error: CMake can not determine linker language for target: lowrank_test
-- Generating done
-- Build files have been written to: C:/Users/octave-user/install/pastix-6.0.0/build
$~/install/pastix-6.0.0/build
$ make
[ 0%] Built target refinement_headers_tgt
[ 0%] Built target sopalin_headers
[ 0%] Built target kernels_headers_tgt
[ 0%] Built target spm_headers_tgt
[ 1%] Building C object spm/CMakeFiles/pastix_spm.dir/spm.c.obj
In file included from C:/Users/octave-user/install/pastix-6.0.0/include/pastix.h:38:0,
from C:/Users/octave-user/install/pastix-6.0.0/common/common.h:22,
from c:/Users/octave-user/install/pastix-6.0.0/spm/spm.c:19:
C:/Users/octave-user/install/pastix-6.0.0/include/pastix/nompi.h: In function 'pastix_nompi_copy':
C:/Users/octave-user/install/pastix-6.0.0/include/pastix/nompi.h:146:45: error: 'int32_t' undeclared (first use in this function); did you mean 'off32_t'?
memcpy(dst, src, count * sizeof(int32_t));
^~~~~~~
off32_t
C:/Users/octave-user/install/pastix-6.0.0/include/pastix/nompi.h:146:45: note: each undeclared identifier is reported only once for each function it appears in
C:/Users/octave-user/install/pastix-6.0.0/include/pastix/nompi.h:149:45: error: 'int64_t' undeclared (first use in this function); did you mean 'int32_t'?
memcpy(dst, src, count * sizeof(int64_t));
^~~~~~~
int32_t
In file included from c:/Users/octave-user/install/pastix-6.0.0/spm/spm.c:19:0:
C:/Users/octave-user/install/pastix-6.0.0/common/common.h: In function 'pastix_setenv':
C:/Users/octave-user/install/pastix-6.0.0/common/common.h:135:12: warning: implicit declaration of function 'setenv'; did you mean 'getenv'? [-Wimplicit-function-declaration]
return setenv( var, value, overwrite );
^~~~~~
getenv
c:/Users/octave-user/install/pastix-6.0.0/spm/spm.c: At top level:
c:/Users/octave-user/install/pastix-6.0.0/spm/spm.c:23:10: fatal error: c_spm.h: No such file or directory
#include "c_spm.h"
^~~~~~~~~
compilation terminated.
make[2]: *** [spm/CMakeFiles/pastix_spm.dir/spm.c.obj] Error 1
make[1]: *** [spm/CMakeFiles/pastix_spm.dir/all] Error 2
make: *** [all] Error 2
```RAMET PierreRAMET Pierrehttps://gitlab.inria.fr/solverstack/pastix/-/issues/32Compilation error for pastix+mpi2018-08-08T14:27:22+02:00MARAIT GillesCompilation error for pastix+mpiWhen trying to install pastix,
```
spack install pastix+mpi
```
I have a compilation Error:
```
[ 9%] Building C object spm/CMakeFiles/spm.dir/src/p_spm.c.o
cd /tmp/test/spack-stage/spack-stage-tMmrK3/pastix/spack-build/spm && /home/test/Installed/newspack/lib/spack/env/gcc/gcc -DCBLAS_HAS_CGEMM3M -DCBLAS_HAS_ZGEMM3M -Dspm_EXPORTS -I/home/test/pkgspack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9-gjjmk2lkg6s7ezidzrcljkh64rqilhp2/include -I/home/test/pkgspack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc/include -I/home/test/pkgspack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7/include/libxml2 -I/home/test/pkgspack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.1-lmdzeojhveb4utlzwacuxrjc5zvrx5jq/include -I/home/test/pkgspack/linux-ubuntu16.04-x86_64/gcc-5.4.0/scotch-6.0.6-p5xne3pyxkuvl7sx4pz2tmuapg7yg4jh/include -I/home/test/Installed/newspack/var/spack/stage/pastix-solverstack-x33wutyrehx4ubaime3aoguitnfnpbam/pastix/spm/include -I/home/test/Installed/newspack/var/spack/stage/pastix-solverstack-x33wutyrehx4ubaime3aoguitnfnpbam/pastix/spm/src -I/tmp/test/spack-stage/spack-stage-tMmrK3/pastix/spack-build/spm/src -Wall -Wextra -mcx16 -O2 -g -DNDEBUG -g3 -g3 -fPIC -fexceptions;-pthread -DPRECISION_p -UPRECISION_s -UPRECISION_d -UPRECISION_c -UPRECISION_z -o CMakeFiles/spm.dir/src/p_spm.c.o -c /tmp/test/spack-stage/spack-stage-tMmrK3/pastix/spack-build/spm/src/p_spm.c
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
/bin/sh: 1: -pthread: not found
```
There is a `-fexceptions;-pthread` somewhere in the command line causing the error. It looks like a CMake problem: the variables `MPI_CXX_COMPILE_OPTIONS`, `MPI_C_COMPILE_OPTIONS` and `MPI_Fortran_COMPILE_OPTIONS` are CMake lists set to `-fexceptions;-pthread`, and the list is written verbatim into the compile line, so the shell treats everything after the `;` as a separate command (hence `/bin/sh: 1: -pthread: not found`).
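If that reading is right, the usual remedy on the CMake side (a sketch, assuming the project can require CMake >= 3.9; `mytarget` is a placeholder name) is to consume MPI through the imported target instead of pasting the raw variables into the flags:

```cmake
find_package(MPI REQUIRED)
# The imported target carries -fexceptions;-pthread as a proper CMake
# list, so each element is passed to the compiler separately and no
# literal ';' ever reaches the shell.
target_link_libraries(mytarget PRIVATE MPI::MPI_C)
```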
NB: It does not occur when using pastix~mpi (default).https://gitlab.inria.fr/solverstack/pastix/-/issues/33unterminated #if in common/isched_nohwloc.c2018-09-11T09:48:35+02:00Xavier Lacosteunterminated #if in common/isched_nohwloc.c[ 81%] Building C object CMakeFiles/pastix.dir/common/isched_nohwloc.c.o
/appli_RD/LACOSTE/OMEGA/cmakesuperbuild/build-gnu/pastix/src/pastix_project/common/isched_nohwloc.c:20:0: error: unterminated #if
#if defined(PASTIX_HAVE_SCHED_SETAFFINITY)
I just added the #endif and it builds.https://gitlab.inria.fr/solverstack/pastix/-/issues/34pastixFinalize (from Fortran) not threadsafe2018-10-30T12:53:34+01:00Andrea PiacentinipastixFinalize (from Fortran) not threadsafeThis piece of code is working
```
IF ( self%ll_set) THEN
DO ib_mat = 1, self%il_nbmat
CALL pastixFinalize (self%sla_mat(ib_mat)%pastix_data)
END DO
END IF
!$omp parallel num_threads(NBTHDS_B) default(none), &
!$omp shared(self, NBTHDS_B), &
!$omp private(ib_thr, ib_mat, il_gmat), &
!$omp private(matrix, sys, il_info)
!$omp do
DO ib_thr = 1, NBTHDS_B
DO ib_mat = 1, self%il_nblocmat(ib_thr)
il_gmat = self%ila_glomat(ib_mat,ib_thr)
matrix => self%sla_mat(il_gmat)
matrix%iparm(:) = self%iparm(:)
matrix%dparm(:) = self%dparm(:)
CALL pastixInit( matrix%pastix_data, 0, &
& matrix%iparm, matrix%dparm)
CALL spmConvert(SpmCSC, matrix%spm, il_info)
CALL pastix_task_analyze( matrix%pastix_data, matrix%spm, il_info )
CALL pastix_task_numfact( matrix%pastix_data, matrix%spm, il_info )
sys => self%sla_sys(il_gmat)
sys%idmat = ib_mat
sys%nrhs = 1
END DO
END DO
!$omp end do
!$omp end parallel
```
while this one (notice that pastixFinalize is inside the OpenMP loop while beforehand it was in a separate previous singlethreaded loop) ends in error with a double corrupted
```
!$omp parallel num_threads(NBTHDS_B) default(none), &
!$omp shared(self, NBTHDS_B), &
!$omp private(ib_thr, ib_mat, il_gmat), &
!$omp private(matrix, sys, il_info)
!$omp do
DO ib_thr = 1, NBTHDS_B
DO ib_mat = 1, self%il_nblocmat(ib_thr)
il_gmat = self%ila_glomat(ib_mat,ib_thr)
matrix => self%sla_mat(il_gmat)
matrix%iparm(:) = self%iparm(:)
matrix%dparm(:) = self%dparm(:)
IF ( self%ll_set ) CALL pastixFinalize ( matrix%pastix_data )
CALL pastixInit( matrix%pastix_data, 0, &
& matrix%iparm, matrix%dparm)
CALL spmConvert(SpmCSC, matrix%spm, il_info)
CALL pastix_task_analyze( matrix%pastix_data, matrix%spm, il_info )
CALL pastix_task_numfact( matrix%pastix_data, matrix%spm, il_info )
sys => self%sla_sys(il_gmat)
sys%idmat = ib_mat
sys%nrhs = 1
END DO
END DO
!$omp end do
!$omp end parallel
```
Thankshttps://gitlab.inria.fr/solverstack/pastix/-/issues/35pastix_task_analyze (from fortran) not perfectly threadsafe2019-07-08T11:38:16+02:00Andrea Piacentinipastix_task_analyze (from fortran) not perfectly threadsafeAfter a large number of runs we spotted some conflicts in concurrent `pastix_task_analyze` calls on different OpenMP threads.
Adding `!$omp critical` just around this call made the code much more stable.https://gitlab.inria.fr/solverstack/pastix/-/issues/36Problem when compiling Pastix 6.0.12018-11-28T13:15:31+01:00Mathieu FavergeProblem when compiling Pastix 6.0.1Hi,
I'm trying to install Pastix 6.0.1 on our cluster and I've some troubles.
I've (successfully?) installed hwloc 2.02 and scotch 6.0.6 (with -fPIC flag as it was later required by pastix)
I've run cmake like this:
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/gmgec/mrgm/arteta/libs_DAIMON/pastix-6.0.1 -DHWLOC_DIR:PATH=/home/gmgec/mrgm/arteta/libs_DAIMON/hwloc-2.0.2/ -DSCOTCH_DIR:PATH=/home/gmgec/mrgm/arteta/libs_DAIMON/scotch-6.0.6/ -DPASTIX_INT64=OFF .
And then, when I run "make", at 32%, I got this error (among other):
/home/gmgec/mrgm/arteta/libs_DAIMON/sources/pastix-6.0.1/kernels/core_cgemdm.c(137): error: identifier "pastix_trans_t" is undefined
core_cgemdm( pastix_trans_t transA, pastix_trans_t transB,
What I'm doing wrong?
Many thanks
JoaquimMathieu FavergeMathieu Favergehttps://gitlab.inria.fr/solverstack/pastix/-/issues/37Using pastix just as a solver2022-04-12T10:56:32+02:00Mathieu FavergeUsing pastix just as a solverHello,
I am interested in using Pastix as a solver in another software which does the other required things. Therefore, I would like to give pastix the CSR matrix, the rhs, the solver settings and would like to get the solution vector as output.
I looked into the examples step-by-step.c and simple.c and the others as well and went through the documentation a little, but I am a little confused on the following aspects:
1. I would like to set solver settings separately inside my code and not from the command line through iparm and dparm. I am not sure how to do this exactly.
2. I would like to run the solve on the GPU. Could you please point me to how I can do that? What should I set in pastix_data that allows me to do that?
3. I also would like to factor my CSR matrix on the CPU and then transfer the factors to the GPU for the solve. Can you please give me pointers for how I can do this ?
I am very grateful for your help.
Thank you,
Patrik.Mathieu FavergeMathieu Favergehttps://gitlab.inria.fr/solverstack/pastix/-/issues/38fmultilap tests (and applications) hang with OpenBLAS and OpenMP2019-07-17T15:46:25+02:00Andrea Piacentinifmultilap tests (and applications) hang with OpenBLAS and OpenMPDear everybody,
I've tried to install OpenBLAS with the USE_OPENMP=1 option and to compile pastix versus it (`-DBLA_VENDOR=Open -DBLAS_DIR=xxx`).
I probably made something wrong, but while other applications can use OpenBLAS, the fmultilap tests hang (both _seq and _mt) unless I set OMP_NUM_THREADS=1.
The real size application that inspired fmultilap behaves exactly the same.
Thanks
APhttps://gitlab.inria.fr/solverstack/pastix/-/issues/39Check reordered matrix2022-04-12T10:56:12+02:00Ghost UserCheck reordered matrixHello,
I am using pastix as a solver, and I would like to
see the reordered matrix. Is there a way to do so ?
Thank you.https://gitlab.inria.fr/solverstack/pastix/-/issues/40dump_rank.c is not working for compress_min_width 1 and compress_min_height 12020-02-26T15:12:30+01:00KORKMAZ Esraguldump_rank.c is not working for compress_min_width 1 and compress_min_height 1Hello,
When I run "./dump_rank -t 1 -f 1 --lap 40:40:40:4.:1. -i iparm_compress_min_width 1 -i iparm_compress_min_height 1" it gives a segmentation error. I guess there is an unhandled limit point issue for the compress size 1.
Thank you,
Esragulhttps://gitlab.inria.fr/solverstack/pastix/-/issues/41Problem in testing step-by-step_dist.c2022-04-12T10:55:52+02:00Ghost UserProblem in testing step-by-step_dist.cHello,
I have been working on the solver part of the Jorek code and we are planning to use distributed pastix for the same. In order to understand that I started with the example program step-by-step_dist.c as provided with the pastix package. There seems to be a bug in the matrix given with the program (column and row pointers are not consistent for mpid != 0), so I am using my own example, given in the attached .c file. It gives many errors which I am not able to understand (please see the file out).
Kindly help me out at your earliest convenience.
Thanks and regards,
Prabal
[out](/uploads/bb89a5ae264fe2260127ae1840e66efd/out)[step-by-step_dist.c](/uploads/6174c384699b415b2cc5df54250fa7df/step-by-step_dist.c)https://gitlab.inria.fr/solverstack/pastix/-/issues/42Update spm2019-11-01T19:26:01+01:00Ghost UserUpdate spmPlease update spm to commit `ada4963b` to fix [this](https://gitlab.inria.fr/solverstack/spm/issues/6).https://gitlab.inria.fr/solverstack/pastix/-/issues/43Iterative refinement returns NaN at the first iteration2020-02-26T16:00:56+01:00Ghost UserIterative refinement returns NaN at the first iterationIf I run the following test in Octave and iterative refinement is enabled, the result is NaN. Without iterative refinement the result is OK.
[pastix_test.tar.bz2](/uploads/13feb314531f911ce29a49b363eb9b11/pastix_test.tar.bz2)https://gitlab.inria.fr/solverstack/pastix/-/issues/44Const correctness2020-02-28T16:14:40+01:00Ghost UserConst correctnessAre the function parameters marked const where applicable? For example, does the refine task really need to modify the right hand side?https://gitlab.inria.fr/solverstack/pastix/-/issues/45Race condition with OpenBLAS2019-11-11T12:35:51+01:00Ghost UserRace condition with OpenBLASI've been tracking down a strange issue with multithreading with PaStiX and the use of OpenBLAS. In my standalone script testing a SPD matrix PaStiX seems to work without issue. However, using OpenBLAS causes explosions. Reproduction on my machines is fairly straightforward (Fedora 30 with AMD / Intel processors).
1) Given matrix and right hand side [sparse_matrix.mtx](/uploads/8ba8780324d451afcd7a1b31952ce390/sparse_matrix.mtx) [right_hand_side.mtx](/uploads/930dcb1df41ceb791ec2336298df1833/right_hand_side.mtx)
2) Everything seems OK [check_matrix.py](/uploads/c203116eef15599ff54e82e78b001d8a/check_matrix.py)
3) My ghetto standalone script [pastix_test.cpp](/uploads/82cdd4ac577243ca8808f65fc8e3cba0/pastix_test.cpp)
4) Compile with `g++` and link `-lpastix -lspm` and run
5) Compile with `g++` and link `-lopenblas -lpastix -lspm` and run
Step 4 matches the results from step 2. Step 5 returns garbage (NaN / incorrect results).
Is this a known issue with OpenBLAS (v0.3.7) or the manner in which PaStiX calls it? MUMPS pulls in OpenBLAS without issue (although only uses it in a single threaded fashion...).
Thoughts?
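One low-cost experiment that often narrows this kind of diagnosis down (a suggestion, not a confirmed fix) is to pin OpenBLAS to a single thread through its standard environment variable and rerun the reproducer; if the garbage disappears, the clash is between the PaStiX thread pool and the OpenBLAS one:

```shell
# Rerun the standalone reproducer (the compiled pastix_test.cpp from
# step 3; the binary name is assumed) with OpenBLAS forced
# single-threaded:
#   OPENBLAS_NUM_THREADS=1 ./pastix_test
# OpenBLAS reads the variable at load time:
OPENBLAS_NUM_THREADS=1 sh -c 'echo "OpenBLAS threads capped at $OPENBLAS_NUM_THREADS"'
```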
Potentially related to #43.https://gitlab.inria.fr/solverstack/pastix/-/issues/47Build fails because header files (c_spm.h, z_spm.h, d_spm.h, s_spm.h) not found2020-02-26T15:20:15+01:00Ghost UserBuild fails because header files (c_spm.h, z_spm.h, d_spm.h, s_spm.h) not foundDear Pastix developers,
With Pastix 6.1.0 I get compilation errors in the following files:
```
c_bcsc_tests.c
z_bcsc_tests.c
d_bcsc_tests.c
s_bcsc_tests.c
```
The problem can be fixed by changing the following line, for example in c_bcsc_tests.c, from `#include <c_spm.h>` to `#include <../spm/include/c_spm.h>`.
It looks like a missing -I directive in CFLAGS.https://gitlab.inria.fr/solverstack/pastix/-/issues/48Multiple definitions of `cpublok_zcompress` in v6.1.02020-02-26T16:01:42+01:00Ghost UserMultiple definitions of `cpublok_zcompress` in v6.1.0Linking PaStiX gives me an error:
```
/usr/bin/ld: CMakeFiles/pastix_kernels.dir/cpucblk_scompress.c.o: in function `cpublok_zcompress':
cpucblk_scompress.c:(.text+0xb4): multiple definition of `cpublok_zcompress'; CMakeFiles/pastix_kernels.dir/cpucblk_ccompress.c.o:cpucblk_ccompress.c:(.text+0xb4): first defined here
/usr/bin/ld: CMakeFiles/pastix_kernels.dir/cpucblk_zcompress.c.o: in function `cpublok_zcompress':
cpucblk_zcompress.c:(.text+0xb4): multiple definition of `cpublok_zcompress'; CMakeFiles/pastix_kernels.dir/cpucblk_ccompress.c.o:cpucblk_ccompress.c:(.text+0xb4): first defined here
/usr/bin/ld: CMakeFiles/pastix_kernels.dir/cpucblk_dcompress.c.o: in function `cpublok_zcompress':
cpucblk_dcompress.c:(.text+0xb4): multiple definition of `cpublok_zcompress'; CMakeFiles/pastix_kernels.dir/cpucblk_ccompress.c.o:cpucblk_ccompress.c:(.text+0xb4): first defined here
collect2: error: ld returned 1 exit status
```
These functions are indeed there and have been defined multiple times (perhaps they were meant to be `cpublok_Tcompress` or marked static instead?).https://gitlab.inria.fr/solverstack/pastix/-/issues/49Pastix for MPI2020-02-28T16:15:44+01:00Ghost UserPastix for MPIHi,
We are trying to find an effective sparse symmetric solver, but for use with MPI. The Pastix readme says "Distributed memory: PASTIX_WITH_MPI=\[OFF\]: Distributed memory is not supported yet in PaStiX, however you might need to enable this option if your PaRSEC library has been compiled with MPI support."
Does this mean that Pastix itself doesn't use MPI? Another resolved issue has a comment by Ramet Pierre saying "Do you really need to compile PaStiX with MPI ? You know that MPI won't be supported before release 6.1". Does this mean that the latest Pastix version *DOES* use MPI?
Thanks.https://gitlab.inria.fr/solverstack/pastix/-/issues/50Linking problem when trying to install PaStiX2022-04-12T10:54:52+02:00Giorgio GiorgianiLinking problem when trying to install PaStiXI am trying to install PaStiX using cmake as a part of a bigger project.
I am on Ubuntu 18.04.4 LTS. I set up the dependencies using
sudo apt-get install cmake gcc gfortran libhwloc-dev libscotch-dev libopenblas-dev liblapacke-dev python-numpy
Here are the cmake options that I am using.
```
if(ENABLE_PASTIX)

  cache_package_cmake_config_dir(EBC-Project EBC)

  ExternalProject_Add(lapacke
    GIT_REPOSITORY https://github.com/Reference-LAPACK/lapack.git
    GIT_TAG origin/master
    INSTALL_DIR ${STAGED_INSTALL_PREFIX}
    CMAKE_ARGS
      -DCMAKE_Fortran_COMPILER=${CMAKE_Fortran_COMPILER}
      -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
      -DLAPACKE=ON
      -DCBLAS=ON
      -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
      -DCMAKE_INSTALL_MESSAGE=${CMAKE_INSTALL_MESSAGE}
    CMAKE_CACHE_ARGS
      -DCMAKE_MODULE_PATH:PATH=${CMAKE_MODULE_PATH}
      -DEBC_COMPILE_DEFINITIONS:STRING=${EBC_COMPILE_DEFINITIONS}
      -DINSTALL_LIBDIR:PATH=${INSTALL_LIBDIR}
      -DINSTALL_BINDIR:PATH=${INSTALL_BINDIR}
      -DINSTALL_INCLUDEDIR:PATH=${INSTALL_INCLUDEDIR}
      -DINSTALL_MODDIR:PATH=${INSTALL_MODDIR}
      -DEBC_DIR:PATH=${EBC_DIR}
  )

  ExternalProject_Add(pastix
    GIT_REPOSITORY https://gitlab.inria.fr/solverstack/pastix.git
    GIT_TAG origin/master
    INSTALL_DIR ${STAGED_INSTALL_PREFIX}
    CMAKE_ARGS
      -DUPDATE_DISCONNECTED=${PASTIX_NOUPDATE}
      -DCMAKE_Fortran_COMPILER=${CMAKE_Fortran_COMPILER}
      -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
      -DCMAKE_C_STANDARD=99
      -DPASTIX_WITH_FORTRAN=ON
      -DPASTIX_WITH_MPI=ON
      -DPASTIX_ORDERING_SCOTCH=ON
      -DPASTIX_INT64=OFF
      -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
      -DCMAKE_INSTALL_MESSAGE=${CMAKE_INSTALL_MESSAGE}
    CMAKE_CACHE_ARGS
      -DCMAKE_MODULE_PATH:PATH=${CMAKE_MODULE_PATH}
      -DEBC_COMPILE_DEFINITIONS:STRING=${EBC_COMPILE_DEFINITIONS}
      -DINSTALL_LIBDIR:PATH=${INSTALL_LIBDIR}
      -DINSTALL_BINDIR:PATH=${INSTALL_BINDIR}
      -DINSTALL_INCLUDEDIR:PATH=${INSTALL_INCLUDEDIR}
      -DINSTALL_MODDIR:PATH=${INSTALL_MODDIR}
      -DEBC_DIR:PATH=${EBC_DIR}
    DEPENDS lapacke
  )
```
Build fails with the following message:
```
[ 88%] Built target pastix_lrtests
[ 88%] Linking C executable d_rradd_tests
libpastix_lrtests.so: undefined reference to 'LAPACKE_dlatms_work'
libpastix_lrtests.so: undefined reference to 'LAPACKE_slatms_work'
libpastix_lrtests.so: undefined reference to 'LAPACKE_clatms_work'
libpastix_lrtests.so: undefined reference to 'LAPACKE_zlatms_work'
collect2: error: ld returned 1 exit status
test/CMakeFiles/d_rradd_tests.dir/build.make:121: recipe for target 'test/d_rradd_tests' failed
make[5]: *** [test/d_rradd_tests] Error 1
CMakeFiles/Makefile2:2166: recipe for target 'test/CMakeFiles/d_rradd_tests.dir/all' failed
make[4]: *** [test/CMakeFiles/d_rradd_tests.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make[3]: *** [all] Error 2
CMakeFiles/pastix.dir/build.make:112: recipe for target 'subprojects/Stamp/pastix/pastix-build' failed
make[2]: *** [subprojects/Stamp/pastix/pastix-build] Error 2
CMakeFiles/Makefile2:196: recipe for target 'CMakeFiles/pastix.dir/all' failed
make[1]: *** [CMakeFiles/pastix.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
```
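For context on the link failure: `LAPACKE_*latms_work` are LAPACKE wrappers around the LAPACK TMG (test matrix generation) routines, which the Reference-LAPACK CMake build only compiles when explicitly requested. A hedged sketch of the extra option one might add to the `lapacke` ExternalProject above (assuming the `LAPACKE_WITH_TMG` option exists in the LAPACK revision being checked out, as it does in recent releases):

```cmake
ExternalProject_Add(lapacke
  GIT_REPOSITORY https://github.com/Reference-LAPACK/lapack.git
  GIT_TAG origin/master
  CMAKE_ARGS
    -DLAPACKE=ON
    -DCBLAS=ON
    # Build tmglib and its LAPACKE interfaces (LAPACKE_dlatms_work, ...),
    # which the PaStiX low-rank tests link against.
    -DLAPACKE_WITH_TMG=ON
)
```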
https://gitlab.inria.fr/solverstack/pastix/-/issues/51
pastixf.mod not built when mpi is turned on (2020-05-12, Giorgio Giorgiani)

I tried to build the latest Pastix downloaded from gitlab (submodule updated) using mpi:
`cmake .. -DPASTIX_ORDERING_SCOTCH=OFF -DPASTIX_WITH_MPI=ON`
Here is the list of modules built:
```
giorgio@giorgio-HP-ZBook-14-G2:~/libs/testpastix2/pastix/build/mod_files$ l
total 16
-rw-r--r-- 1 giorgio giorgio 3056 mai 12 19:02 spm_enums.mod
-rw-r--r-- 1 giorgio giorgio 8975 mai 12 19:02 spmf.mod
```
When I build it without mpi, `cmake .. -DPASTIX_ORDERING_SCOTCH=OFF`, I get the following modules built:
```
giorgio@giorgio-HP-ZBook-14-G2:~/libs/testpastix2/pastix/build/mod_files$ l
total 44
-rw-r--r-- 1 giorgio giorgio  3056 mai 12 18:57 spm_enums.mod
-rw-r--r-- 1 giorgio giorgio  8975 mai 12 18:57 spmf.mod
-rw-r--r-- 1 giorgio giorgio  6174 mai 12 18:57 pastix_enums.mod
-rw-r--r-- 1 giorgio giorgio 20184 mai 12 18:57 pastixf.mod
```
Can you help?
Thanks, Giorgio

https://gitlab.inria.fr/solverstack/pastix/-/issues/52
spm/const.h and spm/datatypes.h are installed in the wrong folder (2021-10-28, Ghost User)

Dear PaStiX developers,
If I compile pastix from source, and install it below /usr/local/, then spm/const.h and spm/datatypes.h are installed in /usr/local/include/ instead of /usr/local/include/spm/. This causes a compilation error if pastix.h is included in a user program. If I move both files to /usr/local/include/spm/, everything works as expected.

https://gitlab.inria.fr/solverstack/pastix/-/issues/53
mingw Pastix 6.2.0 (2021-04-19, Tony Delarue)

I made a pastix 6.2.0 package for mingw (msys2), base package mingw-w64-pastix (MSYS2 Packages), without Metis.
With Metis I have this error:

```
-- Configuration is done - A summary of the current configuration
   has been written in C:/msys64/usr/local/pkg_pastix/src/build-x86_64-w64-mingw32-static/config.log
-- Configuring done
CMake Error at CMakeLists.txt:707 (add_library):
  Target "pastix" links to target "MORSE::METIS" but the target was not
  found. Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?
-- Generating done
```
I may add that version 6.1.0 does not have this error.
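As an aside on the error above: CMake fails at generate time when a target links against an imported target that was named but never created. A hedged sketch of the kind of guard a build could use (assuming a `FindMETIS` module that defines `MORSE::METIS` on success, as the morse_cmake modules used by PaStiX appear to do):

```cmake
# Only request and link METIS when the feature is enabled, and only link
# the imported target if it actually exists.
if(PASTIX_ORDERING_METIS)
  find_package(METIS REQUIRED)  # assumed to define MORSE::METIS on success
endif()
if(TARGET MORSE::METIS)
  target_link_libraries(pastix PRIVATE MORSE::METIS)
endif()
```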
Best regards
Rafal

Assignee: Mathieu Faverge

https://gitlab.inria.fr/solverstack/pastix/-/issues/54
Problem when building Pastix with cmake (release 6.2): undefined references during the compilation of example_mdof2.c.o (2021-05-25, DURUFLE Marc)

Hi, I tried to install the release 6.2.0 of Pastix. I executed the following cmake command:
```
cmake . -DPASTIX_INT64=OFF -DPASTIX_WITH_MPI=ON -DCMAKE_INSTALL_PREFIX=/home/durufle/Solve/pastix/build
```

The command succeeded and then I compiled with:

```
make install
```

The installation stops at 78% with the following errors:
```
[ 78%] Building C object spm/examples/CMakeFiles/example_mdof2.dir/example_mdof2.c.o
[ 78%] Linking C executable example_mdof2
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmSort':
/home/durufle/Solve/pastix/spm/src/spm.c:549: undefined reference to `z_spmSort'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmMergeDuplicate':
/home/durufle/Solve/pastix/spm/src/spm.c:603: undefined reference to `z_spmMergeDuplicate'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmMatMat':
/home/durufle/Solve/pastix/spm/src/spm.c:1131: undefined reference to `spm_zspmm'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spm2Dense':
/home/durufle/Solve/pastix/spm/src/spm.c:434: undefined reference to `z_spm2dense'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmNorm':
/home/durufle/Solve/pastix/spm/src/spm.c:503: undefined reference to `z_spmNorm'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmPrint':
/home/durufle/Solve/pastix/spm/src/spm.c:919: undefined reference to `z_spmPrint'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmExpand':
/home/durufle/Solve/pastix/spm/src/spm.c:961: undefined reference to `z_spmExpand'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmMatVec':
/home/durufle/Solve/pastix/spm/src/spm.c:1035: undefined reference to `spm_zspmv'
/usr/bin/ld: ../src/libspm.a(spm.c.o): in function `spmScalMatrix':
/home/durufle/Solve/pastix/spm/src/spm.c:1261: undefined reference to `z_spmScal'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x18): undefined reference to `z_spmGenMat'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x38): undefined reference to `z_spmCheckAxb'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x98): undefined reference to `z_spmConvertCSC2CSR'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0xc8): undefined reference to `z_spmConvertCSC2IJV'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0xf8): undefined reference to `z_spmConvertCSR2CSC'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x158): undefined reference to `z_spmConvertCSR2IJV'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x188): undefined reference to `z_spmConvertIJV2CSC'
/usr/bin/ld: ../src/libspm.a(spm.c.o):(.data.rel.ro+0x1b8): undefined reference to `z_spmConvertIJV2CSR'
```
I tried cloning the git repository (instead of release 6.2.0), but I obtained the same error. I also tried to compile my own code with the following line:
```
mpicxx -I. -DSELDON_WITH_PASTIX -DSELDON_WITH_MPI test/program/direct_test.cpp -I/home/durufle/Solve/pastix-6.2.0/include -L/home/durufle/Solve/pastix-6.2.0 -lpastix -L/home/durufle/Solve/pastix-6.2.0/spm/src -lspm -L/home/durufle/Solve/pastix-6.2.0/kernels -lpastix_kernels -L/home/durufle/Solve/scotch_6.0.4 -lscotch -lscotcherr -I/home/durufle/Solve/pastix-6.2.0/spm/include -lhwloc -llapacke -lblas -lrt -lpthread
```
I obtained other undefined references that I do not reproduce here. In the documentation I did not find how to compile pastix as a library (do we need to link with -lspm or -lpastix_kernels?). I encountered a page with the compilation command, but I did not succeed in finding that page again.

https://gitlab.inria.fr/solverstack/pastix/-/issues/57
pkg-config --static (mingw/msys2) (2023-12-11, Rafal brzegowy)

If I use the command `pkg-config --libs --static pastix` then I have this output:
`-LC:/msys64/mingw64/lib -lpastix -lpastix_kernels -lpastix_starpu -lC:/msys64/mingw64/lib/libm.a -lC:/msys64/mingw64/lib/libopenblas.dll.a -lspm C:/msys64/mingw64/lib/libopenblas.dll.a\;C:/msys64/mingw64/lib/libm.a\;C:/msys64/mingw64/lib/libopenblas.dll.a -lstarpu-1.3 -lpthread -lssp -g0 -lws2_32 -lpthread -LD:/a/msys64/mingw64/lib -lm -lgdi32 -lltdl -lpthread -lhwloc -lm -lgdi32 -lltdl -lpthread -LC:/msys64/mingw64/lib -lspm C:/msys64/mingw64/lib/libopenblas.dll.a\;C:/msys64/mingw64/lib/libm.a\;C:/msys64/mingw64/lib/libopenblas.dll.a`
Why is the `libopenblas.dll.a` version referenced for static linking rather than the usual `-lopenblas` (i.e. the libopenblas.a full static link)?

Milestone: PaStiX 6.3.2

https://gitlab.inria.fr/solverstack/pastix/-/issues/59
pastix 6.2.2 and clang (msys2/mingw) (2023-12-11, Rafal brzegowy)

I am trying to compile PaStiX with clang but I have this error:
```
[311/854] Building Fortran object spm/wrappers/fortran90/CMakeFiles/spmf.dir/src/spmf.f90.obj
FAILED: spm/wrappers/fortran90/CMakeFiles/spmf.dir/src/spmf.f90.obj mod_files/spmf.mod
C:\msys64\clang64\bin\flang.exe -IC:\msys64\usr\local\pkg_pastix\src\pastix-6.2.2\spm\wrappers\fortran90\src -IC:/msys64/usr/local/pkg_pastix/src/pastix-6.2.2/spm/include -IC:/msys64/usr/local/pkg_pastix/src/pastix-6.2.2/spm/src -IC:/msys64/usr/local/pkg_pastix/src/build-x86_64-w64-mingw32-static/spm/include -IC:/msys64/usr/local/pkg_pastix/src/build-x86_64-w64-mingw32-static/spm/src -O2 -module-dirmod_files -c spm/wrappers/fortran90/CMakeFiles/spmf.dir/src/spmf.f90-pp.f90 -o spm/wrappers/fortran90/CMakeFiles/spmf.dir/src/spmf.f90.obj
error: loc("./spm/wrappers/fortran90/CMakeFiles/spmf.dir/src/spmf.f90-pp.f90":605:7): C:/M/mingw-w64-flang/src/flang-15.0.7.src/lib/Lower/IntrinsicCall.cpp:1656: not yet implemented: intrinsic module procedure: c_loc
requested type: (!fir.ref<!fir.type<_QMspmfTspmatrix_t{mtxtype:i32,flttype:i32,fmttype:i32,baseval:i32,gn:i32,n:i32,gnnz:i32,nnz:i32,gnexp:i32,nexp:i32,gnnzexp:i32,nnzexp:i32,dof:i32,dofs:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,layout:i32,colptr:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,rowptr:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,loc2glob:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,values:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,glob2loc:!fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>,clustnum:i32,clustnbr:i32,comm:!fir.type<_QMspm_enumsTmpi_comm{mpi_comm:i32}>}>>) -> !fir.type<_QM__fortran_builtinsT__builtin_c_ptr{__address:i64}>
[316/854] Generating core_dgemdm.c
```

Milestone: PaStiX 6.3.2

https://gitlab.inria.fr/solverstack/pastix/-/issues/60
shared build and mingw (2023-12-11, Rafal brzegowy)

Please see this:
https://github.com/msys2/MINGW-packages/issues/17236
and
https://github.com/msys2/MINGW-packages/pull/17270

Milestone: PaStiX 6.3.2
Assignee: Mathieu Faverge

https://gitlab.inria.fr/solverstack/pastix/-/issues/61
Segfault and Error with GPU-StarPU (2023-12-11, Martin Lacroix)

Dear authors, I am encountering an issue while solving different linear systems successively on GPU (starPU scheduler), the latter happens when analyzing the new sparse matrix. I made a minimal example reproducing the problem [Example.zip](/uploads/01b69b91b9a682cf421002b7b480dab9/Example.zip). Note that the example works fine on CPU.
- The first issue is a segfault at this line: https://gitlab.inria.fr/solverstack/pastix/-/blob/master/blend/solver.c#L166. I think the loop should be
```cpp
for (i=0;i<solvmtx->ttsknbr;i++)
```
because we are iterating over `solvmtx->ttsktab` in the loop.
- The second issue is a double free at this line: https://gitlab.inria.fr/solverstack/pastix/-/blob/master/blend/pastix_subtask_blend.c#L181. It seems that `pastix_data->solvglob` and `pastix_data->solvmatr` point to the same object, but `memFree_null` is called on both.
- Finally, I am facing a last issue that is not reproduced by this small example. Assuming the two previous problems are solved, the `assert` at this line: https://gitlab.inria.fr/solverstack/pastix/-/blob/master/sopalin/starpu/pastix_starpu_interface.c#L502 does not pass in my actual script. However, I do not understand the meaning of this check.

Milestone: PaStiX 6.3.2

https://gitlab.inria.fr/solverstack/pastix/-/issues/63
pastix 6.3.0 mingw update package (2023-11-21, Rafal brzegowy)

Error:
```
-- Generate precision dependencies in C:/_/B/src/pastix-6.3.0 - Done
CMake Error at CMakeLists.txt:809 (add_library):
Syntax error in cmake code when parsing string
common\d_integer.c
Invalid character escape '\d'.
CMake Error at CMakeLists.txt:809 (add_library):
Syntax error in cmake code when parsing string
common\c_integer.c
```

Milestone: PaStiX 6.3.1

https://gitlab.inria.fr/solverstack/pastix/-/issues/64
Wrong installation path for Python package (2023-12-12, Jakub Klinkovský)

Pastix installs the Python package to [lib/python/pypastix](https://gitlab.inria.fr/solverstack/pastix/-/blob/master/wrappers/python/CMakeLists.txt?ref_type=heads#L50-56). This path is wrong, Python does not look in `lib/python` when importing modules. The correct path should be something like `lib/python3.11/site-packages/pypastix`, depending on the Python interpreter version. This command may help:
```
python -c "import site; print(site.getsitepackages()[0])"
/usr/lib/python3.11/site-packages
```

Milestone: PaStiX 6.3.2
Assignee: Mathieu Faverge

https://gitlab.inria.fr/solverstack/pastix/-/issues/65
Skip installation of pastix_env.sh and pastix_completion.sh (2023-11-23, Jakub Klinkovský)
I'm packaging [pastix](https://aur.archlinux.org/packages/pastix) in the AUR and there are two annoyances with these scripts:
- `pastix_env.sh` does not make sense when `CMAKE_INSTALL_PREFIX` is set to `/usr` - all the paths are used by default
- `pastix_completion.sh` then remains as the only thing that pastix installs to `/bin`, which makes it useless. Even if it had a purpose, the correct install path for bash-specific completion scripts is `/usr/share/bash-completion/completions/` - see https://github.com/scop/bash-completion
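For the second point, a hedged sketch of an install rule that would put the completion script where bash-completion actually looks, instead of `/bin` (the source path `tools/pastix_completion.sh` is an assumption; adjust it to the script's real location in the tree):

```cmake
include(GNUInstallDirs)  # provides CMAKE_INSTALL_DATAROOTDIR (usually "share")

# bash-completion loads <datadir>/bash-completion/completions/<command>,
# so install the script under the name of the command it completes.
install(FILES tools/pastix_completion.sh
        DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/bash-completion/completions
        RENAME pastix)
```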