PaStiX issues (https://gitlab.inria.fr/solverstack/pastix/-/issues), last updated 2018-07-09T13:55:41+02:00

https://gitlab.inria.fr/solverstack/pastix/-/issues/28
Use of omp critical for ordering subtasks in fmultilap.f90 (Andrea Piacentini, 2018-07-09T13:55:41+02:00)

Keep the whole solve phase in a single OpenMP region, protecting the ordering subtasks with an `OMP CRITICAL` construct.

~~Check if the `bindtab` array is of any use in the analyze+numfact phase.~~ Already done.
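A minimal OpenMP Fortran sketch of the intended structure; `order_subtask` and `solve_subtask` are hypothetical placeholders, not the actual fmultilap.f90 routines:

```fortran
program omp_critical_sketch
  use omp_lib
  implicit none
  integer :: ib
  integer, parameter :: nb_systems = 8   ! hypothetical number of solve subtasks

  ! One parallel region for the whole solve phase, with only the ordering serialized.
  !$omp parallel do schedule(dynamic)
  do ib = 1, nb_systems
     !$omp critical (ordering)
     call order_subtask( ib )    ! placeholder for the non-threadsafe ordering work
     !$omp end critical (ordering)
     call solve_subtask( ib )    ! placeholder for the threadsafe part of the solve
  end do
  !$omp end parallel do

contains

  subroutine order_subtask( ib )
    integer, intent(in) :: ib
    print *, 'ordering subtask', ib, 'on thread', omp_get_thread_num()
  end subroutine order_subtask

  subroutine solve_subtask( ib )
    integer, intent(in) :: ib
    print *, 'solve subtask', ib, 'on thread', omp_get_thread_num()
  end subroutine solve_subtask

end program omp_critical_sketch
```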
https://gitlab.inria.fr/solverstack/pastix/-/issues/27
Missing spmf in pkg-config for pastixf (Andrea Piacentini, 2018-07-09T12:48:21+02:00)

`pkg-config --libs pastixf`
answers
`-L/home/pae/daimon/DAIMON_LIB/pastix_6.0.1/lib -lpastixf -lpastix -lpastix_kernels -lpastix -lpastix_kernels -lspm`
missing `-lspmf` (before `-lspm`)

https://gitlab.inria.fr/solverstack/pastix/-/issues/26
Argument intent mismatch in pastixf (line 644) (Andrea Piacentini, 2018-07-11T15:02:26+02:00)

The first argument `myorder` of `pastixOrderGrid` in wrappers/fortran90/src/pastixf.f90 (line 644) should have `intent(inout)` instead of `intent(in)`. Intel 16 does not accept an intent(in) argument to be passed to `c_f_pointer`.

https://gitlab.inria.fr/solverstack/pastix/-/issues/25
Memory leak using spmCheckAndCorrect in Fortran (MARAIT Gilles, 2018-06-04T21:54:24+02:00)

When calling spmCheckAndCorrect, an spm instance is not freed.
I can see the memory leak using valgrind on the example flaplacian.
https://gitlab.inria.fr/solverstack/pastix/blob/master/wrappers/fortran90/examples/flaplacian.f90#L119
```fortran
call spmCheckAndCorrect( spm, spm2 )
if (.not. c_associated(c_loc(spm), c_loc(spm2))) then
   deallocate(rowptr)
   deallocate(colptr)
   deallocate(values)
   spm%rowptr = c_null_ptr
   spm%colptr = c_null_ptr
   spm%values = c_null_ptr
   call spmExit( spm )
   spm = spm2
end if
```
```
==16403== 96 bytes in 1 blocks are definitely lost in loss record 189 of 231
==16403== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16403== by 0x612022A: spmCopy (spm.c:762)
==16403== by 0x6120A4D: spmCheckAndCorrect (spm.c:687)
==16403== by 0x516E018: __spmf_MOD_spmcheckandcorrect (spmf.f90:558)
==16403== by 0x401C38: MAIN__ (flaplacian.f90:119)
==16403== by 0x40172C: main (flaplacian.f90:15)
```
I have the same memory leak with MaPHyS.
For some reason it is not the case with fsimple and fstep-by-step, but things are allocated differently so I cannot figure out why the memory leak does not occur there.

https://gitlab.inria.fr/solverstack/pastix/-/issues/24
Misaligned output of spmCheckAxb (F90 api) (Andrea Piacentini, 2018-03-05T16:13:44+01:00)

Since the last merge, the output columns of `spmCheckAxb` called from within the F90 wrapper are mangled.
Here is an example (hoping that Gitlab preserves the formatting; the Preview does):
```
|| A ||_1 5.350000e+01
max(|| b_i ||_oo) 1.816403e+01
max(|| x_i ||_oo) 5.000000e-01
|| b_0 - A x_0 ||_1 3.644748e-12
|| b_0 - A x_0 ||_1 / (||A||_1 * ||x_0||_oo * eps) 2.770336e+02 (FAILED)
|| b_1 - A x_1 ||_1 3.276452e-12
|| b_1 - A x_1 ||_1 / (||A||_1 * ||x_1||_oo * eps) 2.470496e+02 (FAILED)
max(|| b_i - A x_i ||_1) 3.644748e-12
max(|| b_i - A x_i ||_1 / (||A||_1 * ||x_i||_oo * eps)) 2.770336e+02 (FAILED)
|| x0_0 ||_oo 3.086420e-14
|| x0_0 - x_0 ||_oo / (||x0_0||_oo * eps) 6.172840e+04 (FAILED)
|| x0_1 ||_oo 2.273182e-14
|| x0_1 - x_1 ||_oo / (||x0_1||_oo * eps) 4.559548e+04 (FAILED)
max(|| x0_i ||_oo) 5.000000e-01
max(|| x0_i - x_i ||_oo) 3.086420e-14
max(|| x0_i - x_i ||_oo / || x0_i ||_oo) 6.172840e+04 (FAILED)
```

https://gitlab.inria.fr/solverstack/pastix/-/issues/23
Output of Factorization, Solve time and GFlops (F90 api) (Andrea Piacentini, 2018-03-05T21:57:15+01:00)

On output of the calls
```
sla_lap(ib)%iparm(IPARM_VERBOSE) = PastixVerboseNot
...
! 1- Initialize the parameters and the solver
call pastixInit( sla_lap(ib)%pastix_data, 0, sla_lap(ib)%iparm, sla_lap(ib)%dparm )
! 2- Analyze the problem
call pastix_task_analyze( sla_lap(ib)%pastix_data, sla_lap(ib)%spm, info )
! 3- Factorize the matrix
call pastix_task_numfact( sla_lap(ib)%pastix_data, sla_lap(ib)%spm, info )
```
The diagnostic prints
```
write(6,*) ' Matrix ', ib
write(6,*) ' Time for analysys ', sla_lap(ib)%dparm(DPARM_ANALYZE_TIME)
write(6,*) ' Pred Time for fact ', sla_lap(ib)%dparm(DPARM_PRED_FACT_TIME)
write(6,*) ' Time for factorization ', sla_lap(ib)%dparm(DPARM_FACT_TIME)
write(6,*) ' GFlops/s for fact ', sla_lap(ib)%dparm(DPARM_FACT_FLOPS)
```
These systematically give null factorization times and a very optimistic ;-) GFlops/s value:
```
Matrix 1
Time for analysys 3.892183303833008E-003
Pred Time for fact 0.115354254012610
Time for factorization 0.000000000000000E+000
GFlops/s for fact 5135859720.59899
```
Notice that, since several factorizations run in parallel on OpenMP threads, the verbosity has to be switched off (set to `PastixVerboseNot`) and all the prints are postponed.
For a test, I switched off the parallelization, set the verbosity to `PastixVerboseNo` and interspersed the a posteriori writes, obtaining
```
+-------------------------------------------------+
Analyse step:
Number of non-zeroes in blocked L 2451183
Fill-in 14.324351
Number of operations in full-rank: LL^t 900.43 MFlops
Prediction:
Model AMD 6180 MKL
Time to factorize 1.220890e-01 s
Time for analyze 3.082991e-03 s
Time for analysys 3.082990646362305E-003
Pred Time for fact 0.122088950728251
+-------------------------------------------------+
Factorization step:
Factorization used: LL^t
Time to initialize internal csc 1.364207e-02 s
Time to initialize coeftab 1.336455e-02 s
Time to factorize 1.121373e-01 s ( 9.89 GFlop/s)
Number of operations 1.11 GFlops
Number of static pivots 17
Time for factorization 0.000000000000000E+000
GFlops/s for fact 10622676294.4213
```
Not tested yet with solution times.

https://gitlab.inria.fr/solverstack/pastix/-/issues/22
Handling of fortran writes and PaStiX generated output (Andrea Piacentini, 2018-06-04T19:35:36+02:00)

I am pretty sure this is a dummy question for people used to mixed Fortran and C programming.
Yet I am puzzled by the fact that if I introduce in the caller
```
write(6,*) '!------------'
```
statements alternating some PaStiX calls that produce output as
```
call spmPrintInfo
```
or
```
call spmCheckAxb
```
while the output on screen respects the order, if I redirect the output to a file
```
./flaplacian > output_threads8_singlemat_light.out
```
the lines are mangled as if Fortran and C were concurrently and independently writing their output to the same file.
Any hint on how to recover in the file what I correctly see on screen?
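For reference, a minimal sketch of one possible workaround, assuming the mangling comes from Fortran and C buffering their output independently: flush the Fortran unit before the PaStiX calls and ask the C library to flush its streams afterwards. The `c_fflush` interface name is only a local binding introduced here for illustration; `fflush(NULL)` flushes every open C output stream.

```fortran
program flush_sketch
  use iso_c_binding, only: c_ptr, c_int, c_null_ptr
  implicit none
  interface
     ! Bind to the C library fflush()
     function c_fflush( stream ) bind(c, name="fflush") result(ierr)
       import :: c_ptr, c_int
       type(c_ptr), value :: stream
       integer(c_int)     :: ierr
     end function c_fflush
  end interface
  integer(c_int) :: ierr

  write(6,*) '!------------'
  flush(6)                       ! flush the Fortran unit before the next C/PaStiX print
  ! ... call spmPrintInfo / spmCheckAxb here ...
  ierr = c_fflush( c_null_ptr )  ! flush the C side before the next Fortran write
end program flush_sketch
```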
Thank you

https://gitlab.inria.fr/solverstack/pastix/-/issues/21
Pb with Python solver interface (Mathieu Faverge, 2018-02-21T18:44:41+01:00)

The pb reported by @lpoirel is that the following code is not working after a few iterations:
```
import pypastix as pastix
import scipy.sparse as sps
import numpy as np

# Set the problem
for n in range(5, 100):
    print(n)
    A = sps.spdiags([np.ones(n)*i for i in [4, -1, -1, -1, -1]],
                    [0, 1, 3, -1, -3], n, n)
    x0 = np.ones(n)
    b = A.dot(x0)
    # Hack to make sure that the mkl is loaded
    tmp = np.eye(2).dot(np.ones(2))
    # Factorize
    solver = pastix.solver(A, verbose=False, thread_nbr=1)
    # Solve
    x = solver.solve(b, x0=x0, check=True)
    solver.finalize()
```
The problem is the corruption of the spm structure that is forwarded to PaStiX.

https://gitlab.inria.fr/solverstack/pastix/-/issues/20
Reuse of a single factorized matrix for different concurrent solve calls (Andrea Piacentini, 2018-03-06T17:38:39+01:00)

Next step of experiments, leading to new questions:
*Aim:* Factorize once a single matrix, then use it for different solve calls (each possibly with nrhs>1) distributed among OpenMP threads
Questions:
1. Is the first argument of `pastix_task_solve` (the `pastix_data_t` structure) input only, or is it modified/updated in the call? Otherwise stated, is `pastix_task_solve` threadsafe w.r.t. the pastix data?
2. If the answer to 1. is "yes", we'd need to run the factorization in a single OpenMP thread, but using all the available cores for PaStiX pthreads
```
iparm(IPARM_THREAD_NBR) = il_ompthr
```
while the solve phase should be single-pthreaded and concurrently run on the OpenMP threads.
How can we modify the `iparm(IPARM_THREAD_NBR)` in the pastix structure after initialization? (A sketch of this intended pattern follows these questions.)
3. If we need to iterate around the switch from the pthreaded factorization and the single-pthreaded solve, what is the default value for `iparm(IPARM_SCHEDULER)` in pthreaded sections? (We learned to set it to `PastixSchedSequential` in conjunction with `iparm(IPARM_THREAD_NBR) = 1` to switch off pthreading and avoid interferences with OpenMP, but we do not know to what it has to be set back.)
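For reference, a minimal OpenMP sketch of the pattern described in question 2. The helpers `factorize_once` and `solve_block` are hypothetical placeholders standing in for the PaStiX calls, and the sketch assumes the answer to question 1 allows concurrent solves:

```fortran
program reuse_factorization_sketch
  use omp_lib
  implicit none
  integer :: ib, il_ompthr
  integer, parameter :: nb_blocks = 4   ! hypothetical number of independent RHS blocks

  il_ompthr = omp_get_max_threads()

  ! 1- Factorize once, outside any OpenMP region, letting PaStiX use all cores
  !    (conceptually: iparm(IPARM_THREAD_NBR) = il_ompthr before analyze/numfact).
  call factorize_once( il_ompthr )

  ! 2- Solve concurrently: each OpenMP thread issues its own solve on a single pthread
  !    (conceptually: iparm(IPARM_THREAD_NBR) = 1, iparm(IPARM_SCHEDULER) = PastixSchedSequential).
  !$omp parallel do schedule(dynamic)
  do ib = 1, nb_blocks
     call solve_block( ib )
  end do
  !$omp end parallel do

contains

  subroutine factorize_once( nthreads )   ! placeholder for pastixInit + analyze + numfact
    integer, intent(in) :: nthreads
    print *, 'factorize with', nthreads, 'threads'
  end subroutine factorize_once

  subroutine solve_block( ib )            ! placeholder for pastix_task_solve on block ib
    integer, intent(in) :: ib
    print *, 'solve block', ib, 'on OpenMP thread', omp_get_thread_num()
  end subroutine solve_block

end program reuse_factorization_sketch
```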
https://gitlab.inria.fr/solverstack/pastix/-/issues/19
Factorize multiple sparse matrices stored in multi-dimensional Fortran arrays (Andrea Piacentini, 2018-03-06T17:37:52+01:00)

The PaStiX 5 fortran API allowed for the access to matrices stored as columns of a multidimensional array.
As an example, an application could have to choose a given matrix among a predefined set, according to some run-time condition.
The matrices could be stored with an extra index `self%il_ia(:,:), self%ila_ja(:,:), self%rla_L(:,:)` where the first dimension is the usual storage and the second is the linear system identifier.
In such a case, PaStiX 5 is called for the linear system `il_gsys` by
```
CALL pastix_fortran(self%sla_px(il_gsys)%pastix_data, &
self%sla_px(il_gsys)%pastix_comm, &
self%sla_px(il_gsys)%n, &
self%ila_ia(:,il_gsys),self%ila_ja(:,il_gsys), &
self%rla_L(:,il_gsys), &
self%sla_px(il_gsys)%perm,self%sla_px(il_gsys)%invp, &
self%rla_L(:,il_gsys),self%sla_px(il_gsys)%nrhs, &
self%sla_px(il_gsys)%iparm,self%sla_px(il_gsys)%dparm)
```
It turns out that neither this syntax
```
self%sla_spm%rowptr = c_loc(self%ila_ia(:,il_gsys))
```
nor
```
self%sla_spm%rowptr = c_loc(self%ila_ia(1,il_gsys))
```
leads to correct results.
As a workaround, we plan to rewrite our routines using arrays of derived datatypes
```
type sys_lin
type(pastix_data_t), pointer :: pastix_data
type(pastix_spm_t), pointer :: spm
type(pastix_spm_t), pointer :: spm2
integer(kind=pastix_int_t), dimension(:), pointer :: ila_ia
integer(kind=pastix_int_t), dimension(:), pointer :: ila_ja
complex(kind=c_double_complex), dimension(:), pointer :: rla_L
end type sys_lin
type(sys_lin), dimension(:), allocatable, target :: sla_lap
...
self%sla_lap(ib)%spm%rowptr = c_loc(sla_lap(ib)%ila_ia)
self%sla_lap(ib)%spm%colptr = c_loc(sla_lap(ib)%ila_ja)
self%sla_lap(ib)%spm%values = c_loc(sla_lap(ib)%rla_L)
```

https://gitlab.inria.fr/solverstack/pastix/-/issues/18
Need of "in place" format conversions (Andrea Piacentini, 2018-06-04T19:27:57+02:00)

For the sake of memory economy, we used to convert IJV (a.k.a. COO) matrices to the CSC format by a call to the Sparskit
```
SUBROUTINE coocsr_inplace ( n, nnz, job, a, ja, ia, iwk )
```
We wonder if spmConvert works in place or generates a second full spm and replaces the INOUT argument before returning.
Notice that coocsr_inplace only needs an integer work array `iwk(n+1)`.
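For reference, a minimal sketch of how the in-place conversion is called, following the signature quoted above and assuming the Sparskit routine is linked in; the 3x3 matrix values and the usual Sparskit meaning of `job = 1` (convert values together with the pattern) are only illustrative:

```fortran
program coocsr_inplace_sketch
  implicit none
  integer, parameter :: n = 3, nnz = 5
  double precision :: a(nnz)
  integer :: ja(nnz), ia(nnz), iwk(n+1), job

  ! Toy matrix in IJV/COO form: (row, col, value) triplets.
  ! ia is dimensioned nnz >= n+1 so that it can receive the pointer array in place.
  ia = (/ 1, 1, 2, 3, 3 /)          ! row indices
  ja = (/ 1, 3, 2, 1, 3 /)          ! column indices
  a  = (/ 1.d0, 2.d0, 3.d0, 4.d0, 5.d0 /)

  job = 1
  call coocsr_inplace( n, nnz, job, a, ja, ia, iwk )
  ! On exit the same arrays hold the compressed storage: no second copy of the
  ! matrix is allocated, only the iwk(n+1) workspace.
end program coocsr_inplace_sketch
```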
https://gitlab.inria.fr/solverstack/pastix/-/issues/17
Test cases timing out or failing with intel16 (Andrea Piacentini, 2018-03-05T11:02:57+01:00)

With a standard installation under intel 16
```
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/pastix_6.0.0 -DSCOTCH_DIR=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/scotch_6.0.4 -DHWLOC_DIR=/home/palm/USERS/andrea/ADOMOCA_LIB/64_intel/hwloc-1.11.3 -DPASTIX_INT64=OFF
```
the following ctests fail on a timeout
example_cg_simple
from example/CTestTestfile.cmake
and
test_hb_spm_convert_tests
test_hb_spm_norm_tests
test_hb_spm_matvec_tests
test_hb_spm_dof_expand_tests
test_hb_spm_dof_norm_tests
test_hb_spm_dof_matvec_tests
from test/CTestTestfile.cmake
and the following fail on a SEGFAULT
The following tests FAILED:
125 - example_hb_simple (SEGFAULT)
127 - example_gmres_simple (SEGFAULT)
128 - example_bicgstab_simple (SEGFAULT)
366 - test_hb_bcsc_norm_tests (SEGFAULT)
367 - test_hb_bcsc_matvec_tests (SEGFAULT)
Totalview indicates that the timeout is reached on
```
__lll_lock_wait_private, FP=7ffd1c8502d0
_L_lock_49, FP=7ffd1c850350
_IO_fgets, FP=7ffd1c850370
readHB_newmat_double, FP=7ffd1c852590
readHB, FP=7ffd1c8525e0
spmReadDriver, FP=7ffd1c852670
main, FP=7ffd1c852880
__libc_start_main, FP=7ffd1c852940
_start, FP=7ffd1c852948
```

https://gitlab.inria.fr/solverstack/pastix/-/issues/16
Check list of the PaStiX 6 implementation of the CERFACS customized features in PaStiX 5 (Andrea Piacentini, 2018-07-23T16:47:15+02:00)

For the effective integration of a sequential threadsafe version of PaStiX 5 as a routine called from an OpenMP region of a hybrid MPI+OpenMP application we had to customize both the sources and the compilation options of PaStiX 5.
Just to be more than sure that no customization is needed in PaStiX 6, here is a list of what we had to do
* *purely sequential version of PaStiX 5*
This was obtained by setting
```
-DFORCE_NOMPI
-DFORCE_NOSMP
```
and removing
```
-DCUDA_SM_VERSION=...
```
at compilation.
Is it now simply enough to set `iparm(IPARM_THREAD_NBR) = 1` and `iparm(IPARM_VERBOSE) = PastixVerboseNot` to avoid any interference or race condition? (A sketch of these settings follows this list.)
* *activation of multiple RHS*
We had to explicitly activate
```
-DMULT_SMX
```
at compilation. I guess this is not necessary anymore (See issue #13).
* *algebra on multiple RHS*
Moreover, working with @faverge on the specific topic, we concluded that using BLAS2 for the operations on the multiple RHS was counterproductive if `nrhs` was actually set to 1. Is the specific case now handled separately?
* *memory management for multiple RHS*
On the same occasion we noticed a great performance improvement when the `STORAGE` mode was activated, which it was NOT by default. How has this aspect been ported to PaStiX 6? Is it a parametrized choice?
* *dependence on the non threadsafe section of Scotch 6.0.4*
A single treatment inside Scotch is not threadsafe. We made it critical by an OpenMP pragma, while in PaStiX 6 it is explicitly handled as atomic. Has this feature been tested in an intensive OpenMP application?
My tests up to 32 threads all passed once, but the bug is not systematic, so an extensive validation, including the impact on performance, is required.
By the way, is there any release announcement for a threadsafe Scotch?
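For reference, a minimal sketch of the settings used to keep one PaStiX 6 instance strictly sequential inside an OpenMP region. The helper name `set_sequential_pastix` is hypothetical; the sketch assumes the `pastixf` module provides the `pastix_int_t` kind and the `IPARM_*` / `Pastix*` constants quoted in these issues, and that `iparm` has already been filled by the usual initialization:

```fortran
subroutine set_sequential_pastix( iparm )
  use pastixf
  implicit none
  integer(kind=pastix_int_t), intent(inout) :: iparm(:)

  iparm(IPARM_THREAD_NBR) = 1                      ! one pthread per PaStiX instance
  iparm(IPARM_SCHEDULER)  = PastixSchedSequential  ! sequential internal scheduler
  iparm(IPARM_VERBOSE)    = PastixVerboseNot       ! silence concurrent prints
end subroutine set_sequential_pastix
```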
https://gitlab.inria.fr/solverstack/pastix/-/issues/15
Compilation and link options generation tool (Andrea Piacentini, 2018-03-05T21:56:57+01:00)

To ease portability of our applications on different platforms where PaStiX is installed with customized compilation options, we relied on the `pastix-conf` tool with the useful `--fc`, `--fcopts` etc. options.
Is there any plan to reinstate it?
For the moment, similar information is contained in
```
build/example/make
```
but only for C applications.
Furthermore, `pkg-config` fails if hwloc was preinstalled and not known in `PKG_CONFIG_PATH`
In the specific case of intel ifort 16.0.4 (using mkl), some of the options turned on by ctest seem not to be strictly necessary, but I wonder if they could become meaningful in some situation.
They are `-f77rtl` as a compilation option, and the
`-lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lz -lm -lrt -lirng -ldecimal -lcilkrts -lstdc++` library links.
Most probably, the latter are installed as standard or default libraries on my test machine (and I am currently unable to remember how to display the full list of standard and default libraries for ifort).

https://gitlab.inria.fr/solverstack/pastix/-/issues/14
API migration from PaStiX 5 to PaStiX 6 (comprehension check) (Andrea Piacentini, 2018-02-21T18:45:23+01:00)

While porting my F90 application from PaStiX 5 to PaStiX 6 I'm not completely sure of all the parameters "translation".
In particular,
* is the old
```
iparm(IPARM_SYM) = API_SYM_YES
```
completely replaced by the spm feature
```
spm%mtxtype = PastixSymmetric
```
* is the old
```
iparm(IPARM_MATRIX_VERIFICATION) = API_YES
```
equivalent to a beforehand call to
```
call spmCheckAndCorrect( spm, spm2 )
```
* is the old
```
iparm(IPARM_RHS_MAKING) = API_RHS_B
```
equivalent to a beforehand call to
```
call spmGenRHS(
```
* is there any other important new tunable feature that we did not use to have in PaStiX 5?

https://gitlab.inria.fr/solverstack/pastix/-/issues/13
Check multi-RHS and add a CI testing (RAMET Pierre, 2018-07-23T13:01:15+02:00)

https://gitlab.inria.fr/solverstack/pastix/-/issues/12
Fortran mangling with icc 17.0 (KUHN Matthieu, 2018-03-08T15:55:42+01:00)

https://gitlab.inria.fr/solverstack/pastix/-/issues/11
spm nnz overflow (Ghost User, 2018-01-11T15:14:48+01:00)

The current sparse matrix struct uses a pastix_int_t (potentially 32bit) for the nnz count. This could easily overflow a 32 bit integer when the other matrix indices will not. Should this be size_t instead?
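An illustrative case of the overflow, with made-up but plausible sizes: every index fits comfortably in 32 bits while the nonzero count does not.

```fortran
program nnz_overflow_example
  use iso_c_binding, only: c_int32_t, c_int64_t
  implicit none
  integer(c_int32_t) :: n, nnz_per_row
  integer(c_int64_t) :: nnz

  n           = 500000_c_int32_t   ! every row/column index fits easily in 32 bits
  nnz_per_row = 5000_c_int32_t     ! e.g. a rather dense high-order discretization
  nnz = int(n, c_int64_t) * int(nnz_per_row, c_int64_t)

  print *, 'nnz              =', nnz                 ! 2 500 000 000
  print *, '32-bit int limit =', huge(0_c_int32_t)   ! 2 147 483 647: nnz overflows
end program nnz_overflow_example
```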
https://gitlab.inria.fr/solverstack/pastix/-/issues/10
Distributed matrix format (Ghost User, 2018-07-20T15:51:39+02:00)

I have a question about the distributed matrix format in PaStiX.
Correct me if I'm wrong, but in the previous version it wasn't permitted to have a column split across more than one process, i.e. to have the same column appear in the `loc2glob` vector of more than one process. From the user side it's definitely easier to assemble the stiffness matrix without much thought to the parallel environment and pass in the matrix with local indexing and the `loc2glob` vector during the solution stage. Will this restriction be present in the MPI release of PaStiX 6 too?

https://gitlab.inria.fr/solverstack/pastix/-/issues/9
MaPHyS + sparse pcd + Pastix -> error in order_apply_level_order (MARAIT Gilles, 2018-02-19T16:42:28+01:00)

When using MaPHyS + Pastix with sparse preconditioning, we obtain a memory error in order_apply_level_order.c:276
It seems that the tree has cycles in it. Attached are the 2 spm files of the 2 processes used: [spmfiles.tgz](/uploads/c2f4d1b37702aac7669ec905435fa6c4/spmfiles.tgz)
NB: when setting iparm(IPARM_TASKS2D_LEVEL) = 0 in MaPHyS, we do not enter this part of the code and the error does not occur.