maphys issues
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues


Issue #13: Maphys with multithreading example
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/13 · Ghost User · 2018-04-14

I am trying to run the MaPHyS dmph_examplethreadkv example with the following input settings:
```
# comment
MATFILE = bcsstk17.mtx
SYM = 1 # 0 (general), 1 (SPD), 2 (symmetric)
ICNTL(1) = 1 # Controls where to write error messages. def:0 (stderr)
ICNTL(2) = 1 # Controls where to write warning messages. def:0 (stderr)
ICNTL(3) = 6 # Controls where to write statistics messages. def:6 (stdout)
ICNTL(4) = 5 # Verbosity level. def:3 (print errors, warnings & detailed statistics); 5: print everything
ICNTL(5) = 1 # Controls when to print the list of controls (Xcntl). def:0 (never print); 1: beginning; 2: each step
ICNTL(6) = 1 # Controls when to print the list of informations (Xinfo). def:0 (never print); 1: beginning; 2: each step
ICNTL(7) = 4 # Partitioning strategy (1: METIS-NODEND, 2: METIS-EDGEND, 3: METIS-NODEWND, 4: SCOTCH-CUSTOM). Old value: 4
ICNTL(8) = -1 # Level of filling for L and U in the ILUT method. def:-1. Relevant if ICNTL(30)=2
ICNTL(9) = -1 # Level of filling for the Schur complement in the ILUT method. def:-1. Relevant if ICNTL(30)=2
ICNTL(10) = 0
ICNTL(11) = 0
ICNTL(12) = 0
ICNTL(13) = 2 # (P) Direct solver for the factorization & the preconditioner. 1: MUMPS; 2: PaStiX; 3: use multiple sparse direct solvers, see ICNTL(15,32)
ICNTL(14) = 0 # Output format. def:0 (stdout); 1: emak
ICNTL(15) = 2 # (P) Direct solver for the preconditioner. 1: MUMPS; 2: PaStiX; 3: multiple, see ICNTL(13,32)
ICNTL(16) = 0
ICNTL(17) = 0
ICNTL(18) = 0
ICNTL(19) = 1
ICNTL(20) = 2 # (P) 3rd-party iterative solver. 0: unset; 1: GMRES; 2: CG; 3: automatic. def:3
ICNTL(21) = 1 # Preconditioner strategy (1: local DENSE, 2: local SPARSE, 4: no preconditioner). Values: 1,2,3,4,5,10
ICNTL(22) = 0 # (P) Orthogonalization scheme of the iterative solver. 0: modified GS; 1: iterative selective GS; 2: classical GS; 3: iterative GS. def:3
ICNTL(23) = 0 # Controls whether the user supplies an initial guess. 0: no; 1: yes. def:0
ICNTL(24) = 10000 # (P) Iterative solver - maximum number of iterations
ICNTL(25) = 0 # Strategy to compute the residual. 0: recurrent; 1: residual. Irrelevant when the iterative solver is CG (ICNTL(20) = 2,3)
ICNTL(26) = 500 # (P) Iterative solver - GMRES: restart every X iterations. Ignored if the solver is CG (ICNTL(20) = 2,3 with SPD)
ICNTL(27) = 0 # Iterative solver - Schur complement matrix/vector product (1: EXPLICIT, 2: IMPLICIT)
ICNTL(28) = 1 # Whether the scaled residual is computed. def:1
ICNTL(29) = 1 # Mode of the FABULOUS iterative solver. def:1
ICNTL(30) = 0 # How to compute the Schur complement or its approximation. def:0 (Schur returned by the sparse direct solver package); 2: sparse approximation based on partial ILU(t,p), should set ICNTL(8,9) and RCNTL(8,9)
ICNTL(31) = 50
ICNTL(32) = 2 # (P) Direct solver for the local Schur factorisation. def:ICNTL(13), see ICNTL(13,15)
ICNTL(33) = 10 # Number of eigenvalues per subdomain. def:10. Ignored if ICNTL(21) != 10
ICNTL(34) = 0 # Whether the convergence history of the iterative solver is written to a file. def:0 (regular output); 1: the file is named "gmres cvg N.dat" or "cg cvg N.dat"
ICNTL(35) = 0
ICNTL(36) = 2 # How to bind threads inside MaPHyS. 0: no bind; 1: thread-to-core bind; 2: grouped bind. def:0. e.g. smph_examplethread
ICNTL(37) = 1 # (P) 2-level parallelism: number of nodes. Only useful if ICNTL(42) > 0. def:1
ICNTL(38) = 1 # (P) 2-level parallelism: number of cores per node. Only useful if ICNTL(42) > 0. def:1
ICNTL(39) = 4 # (P) 2-level parallelism: number of threads (processes) per domain. Only useful if ICNTL(42) > 0. def:1
ICNTL(40) = 4 # (P) 2-level parallelism: number of domains. Only useful if ICNTL(42) > 0. def:1
ICNTL(41) = 0
ICNTL(42) = 1 # (imp) Level of parallelism. 0: MPI only; 1: multithreading, should set ICNTL(37,38,39). def:0
ICNTL(43) = 1 # Input system (centralized on the host, distributed, ...). def:1. e.g. smph_examplerestart -> 3, paddle -> 2 (experimental)
ICNTL(44) = 0 # When activated, a user permutation that MaPHyS should use. def:0. e.g. xmph_exampledistkv in examples
ICNTL(45) = 0 # Local output after analysis. If set to 1, performs a dump of the local matrices. def:0
ICNTL(46) = 0
ICNTL(47) = 20 # Controls the MUMPS instance. def:20
ICNTL(48) = 10 # Controls the FABULOUS Deflated Restart algorithm. def:20
ICNTL(49) = 1 # Choice of the domain decomposition library/algorithm. Warning: modifies the behavior of ICNTL(43). def:1 (maphys); 2: paddle
ICNTL(50) = 0 # Behavior when a MUMPS error indicates that more memory workspace is required. def:0
RCNTL(1) = 0.000E+00
RCNTL(2) = 0.000E+00 # Target for the FABULOUS Deflated Restart algorithm. def:0.0
RCNTL(3) = 0.000E+00 # Value of α for the custom stopping criterion of GMRES and CG (ICNTL(28) = 2). def:0
RCNTL(4) = 0.000E+00 # Value of β for the custom stopping criterion of GMRES and CG (ICNTL(28) = 2). def:0
RCNTL(5) = 2.000E+00 # MUMPS: factor by which the extra workspace (initially given by ICNTL(47)) is multiplied for the next try. def:2.0
RCNTL(6) = 0.000E+00
RCNTL(7) = 0.000E+00
RCNTL(8) = 0.000E+00 # Threshold used to sparsify the LU factors when using PILUT. def:0.0. Relevant if ICNTL(30)=2
RCNTL(9) = 0.000E+00 # Threshold used to sparsify the Schur complement while computing it with PILUT. def:0.0. Relevant if ICNTL(30)=2
RCNTL(10) = 0.000E+00
RCNTL(11) = 1.0e-6 # Preconditioner - local SPARSE - sparsifying tolerance (relevant if ICNTL(21)=2)
RCNTL(12) = 1.000E-02
RCNTL(13) = 0.000E+00
RCNTL(14) = 0.000E+00
RCNTL(15) = 2.000E-01 # Imbalance tolerance used in the Scotch partitioner to create the subdomains. def:0.2
RCNTL(16) = 0.000E+00
RCNTL(17) = 0.000E+00
RCNTL(18) = 0.000E+00
RCNTL(19) = 0.000E+00
RCNTL(20) = 0.000E+00
RCNTL(21) = 1e-12 # (P) Iterative solver - convergence criterion
```
but I got this error:
```
mpirun -n 4 ./dmph_examplethreadkv thread.in
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10096] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10097] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10098] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
[sariyer:10095] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[30023,1],1]
Exit code: 1
```
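For reference, the message "MPI_Comm_f2c() ... before MPI_INIT" means some MPI call runs before MPI_Init. Below is a minimal sketch of the expected initialization order, assuming the crash comes from touching an MPI communicator before MPI is initialized; program and variable names are illustrative, not taken from dmph_examplethreadkv:
```
Program init_order_sketch
  ! Hedged sketch: initialize MPI before anything that touches an MPI
  ! communicator. With ICNTL(42)=1 (multithreading), requesting full
  ! thread support via MPI_Init_thread is the safe choice.
  Implicit None
  Include 'mpif.h'
  Integer :: ierr, provided
  Call MPI_Init_thread(MPI_THREAD_MULTIPLE, provided, ierr)
  If (provided < MPI_THREAD_MULTIPLE) &
       Print *, 'Warning: thread support downgraded to', provided
  ! ... only now set up the solver instance on MPI_COMM_WORLD and run it ...
  Call MPI_Finalize(ierr)
End Program init_order_sketch
```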
Any idea is highly appreciated.


Issue #12: Matrix input ijv coordinate format
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/12 · Ghost User · 2018-03-29

Hello MaPHyS developers,

I want to use the ijv format as the matrix input for the MaPHyS solver, because it is the format used by other solvers.
I am using matrices from the Florida sparse matrix collection. I used the following MATLAB code to convert MATLAB's sparse matrix format into ijv format:
```
function convertToIJV(file_name, S)
  % Write sparse matrix S in ijv (coordinate) format with 0-based indices.
  [i, j, v] = find(S);
  file_id = fopen(file_name, 'wt');
  % Header line: number of rows, number of columns, number of nonzeros.
  fprintf(file_id, '%d %d %d\n', size(S,1), size(S,2), size(v,1));
  % One entry per line: row index, column index, value (indices shifted to 0-based).
  fprintf(file_id, '%d %d %g\n', [i-1, j-1, v]');
  fclose(file_id);
end
```
but I got the following error from MaPHyS with two converted matrices, lddor and barrier2-4:
```
***************************************
* MaPHyS 0.9.6.0 [ Real (KIND=8) ] *
***************************************
* Starting Job1 (Analysis)
==========================
ERROR: graphCheck: arc data do not match
```
These are the first 20 lines of the barrier2-4 matrix:
```
113076 113076 2129496
0 0 38.3734
1 0 -0.0861911
3 0 -0.0861911
4 0 -19.1005
36747 0 -1.4092
36770 0 0.0316341
36773 0 -0.0367471
36775 0 2.2903e-05
36777 0 2.2903e-05
36778 0 0.00506717
74923 0 -175199
74926 0 1.44618e+07
74928 0 -63788.4
74930 0 -63788.4
74931 0 -1.4159e+07
0 1 -0.0861911
1 1 76.7469
2 1 -0.172382
6 1 -38.201
```
One more question, please: if the matrix is symmetric in values, do I need to provide only half of it, or all of the values, in this ijv coordinate format?


Issue #10: Running xmph_paddle.F90 example ... Program received signal SIGSEGV: Segmentation fault - invalid memory reference
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/10 · Ghost User · 2018-04-19

I want to run xmph_paddle.F90, since it is the only example that distributes the matrix among processors for the local solution (please correct me if I am wrong). I installed the Paddle library and compiled the example successfully. However, when I run:
`mpirun -n 4 ./dmph_paddle.ex real_bcsstk17.in`
I got the following error (some of the printed lines are debug statements I added). I think the error is inside the "DMPH_maphys_driver" function, because nothing is printed after that point:
```
version is :0.9.6.0
arith is :DMPH_FLOAT
prefix is :
matrix is :bcsstk17.mtx
rhs is :
initguess is :
outrhsfile is :
outsolfile is :
nb of domains is :4
Have MPI_THREAD_MULTIPLE
54953 54953 54953 54953
0 54953 109906 164859 219812
DBG 1 3364 3367 3368 3369 3370 3371 3372 3373 3374 3375
after allocating solution: 1
DBG 2 6102 5844 5899 5900 5902 5903 5904 5929 5930 5931
after allocating solution: 2
DBG 3 8373 8374 8375 8376 8377 8378 8379 8380 8381 8382
DBG 0 1 2 3 4 20 21 22 26 27 28
Warning : failed to open rhsfile
after allocating solution: 3
Warning : Generates RHS for x s.t. x_i = 1 for all i
Warning : no initial guess
after allocating solution: 0
after broadcasting controls:rank 0
after broadcasting controls:rank 2
after broadcasting controls:rank 1
before calling DMPH_maphys_driver:rank 0
before calling DMPH_maphys_driver:rank 2
before calling DMPH_maphys_driver:rank 1
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
***************************************
* MaPHyS 0.9.6.0 [ Real (KIND=8) ] *
***************************************
Backtrace for this error:
Backtrace for this error:
* Starting Job1 (Analysis)
==========================
* List of controls
FIELD INDEX VALUE
--
ICNTL 1 0
ICNTL 2 0
ICNTL 3 6
ICNTL 4 5
ICNTL 5 1
ICNTL 6 1
ICNTL 7 4
ICNTL 8 -1
ICNTL 9 -1
ICNTL 10 0
ICNTL 11 0
ICNTL 12 0
ICNTL 13 2
ICNTL 14 0
ICNTL 15 1
ICNTL 16 0
ICNTL 17 0
ICNTL 18 0
ICNTL 19 1
ICNTL 20 2
ICNTL 21 1
ICNTL 22 3
ICNTL 23 0
ICNTL 24 500
ICNTL 25 0
ICNTL 26 500
ICNTL 27 0
ICNTL 28 1
ICNTL 29 1
ICNTL 30 0
ICNTL 31 50
ICNTL 32 1
ICNTL 33 10
ICNTL 34 0
ICNTL 35 0
ICNTL 36 2
ICNTL 37 1
ICNTL 38 1
ICNTL 39 1
ICNTL 40 1
ICNTL 41 0
ICNTL 42 0
ICNTL 43 2
ICNTL 44 0
ICNTL 45 0
ICNTL 46 0
ICNTL 47 20
ICNTL 48 10
ICNTL 49 2
ICNTL 50 0
--
RCNTL 1 -9.999E+03
RCNTL 2 0.000E+00
RCNTL 3 0.000E+00
RCNTL 4 0.000E+00
#0 0x7F2A2E489E08
RCNTL 5 2.000E+00
RCNTL 6 -9.999E+03
RCNTL 7 -9.999E+03
RCNTL 8 0.000E+00
RCNTL 9 0.000E+00
RCNTL 10 -9.999E+03
RCNTL 11 1.000E-06
RCNTL 12 1.000E-02
RCNTL 13 -9.999E+03
RCNTL 14 -9.999E+03
RCNTL 15 2.000E-01
RCNTL 16 -9.999E+03
RCNTL 17 -9.999E+03
RCNTL 18 -9.999E+03
RCNTL 19 -9.999E+03
RCNTL 20 -9.999E+03
RCNTL 21 1.000E-07
* List of controls
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
after broadcasting controls:rank 3
before calling DMPH_maphys_driver:rank 3
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7F1ADE7F6E08
#0 0x7F9BB9E8EE08
#1 0x7F9BB9E8DF90
#2 0x7F9BB9ADE4AF
#3 0x7F9BB4AAA014
#4 0x7F9BBA68C5CF
#5 0x7F9BBA727499
#6 0x7F9BBA728337
#7 0x402EF8 in MAIN__ at maphys-dist.F90:369
#1 0x7F2A2E488F90
#2 0x7F2A2E0D94AF
#3 0x7F2A290A5014
#4 0x7F2A2EC875CF
#5 0x7F2A2ED22499
#6 0x7F2A2ED23337
#7 0x402EF8 in MAIN__ at maphys-dist.F90:369
#1 0x7F1ADE7F5F90
#2 0x7F1ADE4464AF
#3 0x7F1AD9412014
#4 0x7F1ADEFF45CF
#5 0x7F1ADF08F499
#6 0x7F1ADF090337
#0 0x7F80141C1E08
#7 0x402EF8 in MAIN__ at maphys-dist.F90:369
#1 0x7F80141C0F90
#2 0x7F8013E114AF
#3 0x7F800EDDD014
#4 0x7F80149BF5CF
#5 0x7F8014A5A499
#6 0x7F8014A5B337
#7 0x402EF8 in MAIN__ at maphys-dist.F90:369
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node afrah-VirtualBox exited on signal 11 (Segmentation fault).
```
Any idea?


Issue #9: Fatal error : Can't open module file ‘dmph_maphys_mod.mod’
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/9 · Ghost User · 2018-03-25

I want to call MaPHyS from my code. I installed MaPHyS with spack and used the examples as a starting point, namely xmph_examplekv.F90, and just changed the first lines to use real (D) arithmetic:
```
```
Program DMPH_example_ff
  !* Modules & co. *!
  Use DMPH_maphys_mod
  Use DMPH_sparse_matrix_mod, Only : &
       DMPH_sm_free
  Use DMPH_toolkit_mod
  Implicit None
  Include 'mpif.h'
```
and the rest is the same as the example. This is my makefile:
```
OPENMPI_DIR =/spackMaphys/spack/opt/spack/linux-x86_64/gcc-5.4.0/openmpi-2.0.1-5ypph4eab66ku63udgkqjcqedjy7cwee
MAPHYS_DIR =/spackMaphys/spack/opt/spack/linux-x86_64/gcc-5.4.0/maphys-0.9.6-2avvu6jyqe7ozagr2wrpogf224d4wjtj
OBJF = testMAPHYS1-Fortran.o
linklibMAPHYS:=-L$(MAPHYS_DIR)/lib -lmaphys -lpackcg -lpackgmres -lslatec -lmph_unittest
linklibTOOLKIT:=-L$(MAPHYS_DIR)/lib -lmph_toolkit
DEBUG = -g -DDEBUG_M
IMPI = -I$(OPENMPI_DIR)/include # Additional MPI include path
LMPI = -L$(OPENMPI_DIR)/lib # Additional MPI linker flags
FFLAGS = # Additional Fortran compiler flags
FOPTFLAGS = -O3 #
INCLUDE = -I$(MAPHYS_DIR)/include
LIBHIPS = $(linklibMAPHYS) $(linklibTOOLKIT)
MPIFC = mpif90
MPIFC_OPT = $(IMPI) $(FFLAGS) $(FOPTFLAGS) $(INCLUDE) $(DEBUG)
LD_OPT = -lio -lm $(LIBHIPS) $(LMPI)
MPILD = $(MPIFC)
MPILD_OPT = $(LD_OPT)
default: testMAPHYS1-Fortran.ex

testMAPHYS1-Fortran.o: maphys1.F90
	$(MPIFC) $(MPIFC_OPT) $< -c -o $@

testMAPHYS1-Fortran.ex: $(OBJF) $(LIBHIPS)
	$(MPILD) $^ $(MPILD_OPT) -o $@
```
but I got this error when typing make:
```
$ make
mpif90 -I/spackMaphys/spack/opt/spack/linux-x86_64/gcc-5.4.0/openmpi-2.0.1-5ypph4eab66ku63udgkqjcqedjy7cwee/include -O3 -I/spackMaphys/spack/opt/spack/linux-x86_64/gcc-5.4.0/maphys-0.9.6-2avvu6jyqe7ozagr2wrpogf224d4wjtj/include -g -DDEBUG_M maphys1.F90 -c -o testHIPS1-Fortran.o
maphys1.F90:28:6:
Use DMPH_maphys_mod
1
Fatal Error: Can't open module file ‘dmph_maphys_mod.mod’ for reading at (1): No such file or directory
compilation terminated.
Makefile:28: recipe for target 'testHIPS1-Fortran.o' failed
make: *** [testHIPS1-Fortran.o] Error 1
```
Any idea?


Issue #8: installation with intel compiler
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/8 · Ghost User · 2018-04-19

Hi,
I tried to install MaPHyS with the Intel compiler and the Intel MPI library with:
spack install maphys+mumps \^cmake@exist \^scotch@exist \^pastix\@exist\%intel@17.0.4 \^intelmpi \^mkl
but I got this error:
> ==> Installing maphys
> ==> Installing pastix
> ==> intelmpi is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/intelmpi-exist-ok6jdbnl6quroeyg2huc5ahtgjisrtat
> ==> hwloc is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/hwloc-1.11.8-b735m4o2k47thieficyorcqq5r6a3ga4
> ==> mkl is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/mkl-exist-sezg44le57cgbp7z577ohzef7dsmte53
> ==> cmake is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/cmake-exist-uyuzmehlh63boykgalyztorgfuy2af6z
> ==> Installing scotch
> ==> flex is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/flex-2.6.0-rjcodh57a42fo7mrufzedjbleyrihpu4
> ==> intelmpi is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/intelmpi-exist-ok6jdbnl6quroeyg2huc5ahtgjisrtat
> ==> bison is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/bison-3.0.4-lts53ko2gfuq6educ7ng46pacbigpkmx
> ==> Trying to fetch from file:///okyanus/users/afarea/spack/maphys-mirror/scotch/scotch-exist.tar.gz
> curl: (37) Couldn't open file /okyanus/users/afarea/spack/maphys-mirror/scotch/scotch-exist.tar.gz
> ==> Fetching from file:///okyanus/users/afarea/spack/maphys-mirror/scotch/scotch-exist.tar.gz failed.
> ==> Trying to fetch from file:/users/spack/spack/var/spack/repos/builtin/packages/fake/empty.tar.gz
> ######################################################################## 100.0%
> ==> Staging archive: /okyanus/users/afarea/spack/spack/var/spack/stage/scotch-exist-ml4xvyshtczye2md5hfiz2gzl6wper7t/empty.tar.gz
> ==> Created stage in /okyanus/users/afarea/spack/spack/var/spack/stage/scotch-exist-ml4xvyshtczye2md5hfiz2gzl6wper7t
> ==> No patches needed for scotch
> ==> Building scotch
> ==> Successfully installed scotch
> Fetch: 0.05s. Build: 0.94s. Total: 0.99s.
> [+] /users/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/scotch-exist-ml4xvyshtczye2md5hfiz2gzl6wper7t
> ==> Trying to fetch from file:///okyanus/users/afarea/spack/maphys-mirror/pastix/pastix-5.2.3.tar.bz2
> ######################################################################## 100.0%
> ==> Staging archive: /okyanus/users/afarea/spack/spack/var/spack/stage/pastix-5.2.3-cklxzrlh4irqit6e4vdheevj4yuaq7zo/pastix-5.2.3.tar.bz2
> ==> Created stage in /okyanus/users/afarea/spack/spack/var/spack/stage/pastix-5.2.3-cklxzrlh4irqit6e4vdheevj4yuaq7zo
> ==> No patches needed for pastix
> ==> Building pastix
> ==> Successfully installed pastix
> Fetch: 0.08s. Build: 56.86s. Total: 56.93s.
> [+] /users/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/pastix-5.2.3-cklxzrlh4irqit6e4vdheevj4yuaq7zo
> ==> hwloc is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/hwloc-1.11.8-b735m4o2k47thieficyorcqq5r6a3ga4
> ==> cmake is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/cmake-exist-uyuzmehlh63boykgalyztorgfuy2af6z
> ==> mkl is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/mkl-exist-sezg44le57cgbp7z577ohzef7dsmte53
> ==> Installing mumps
> ==> Installing metis
> ==> cmake is already installed in /okyanus/users/afarea/spack/spack/opt/spack/linux-x86_64/intel-17.0.4/cmake-exist-uyuzmehlh63boykgalyztorgfuy2af6z
> ==> Trying to fetch from file:///okyanus/users/afarea/spack/maphys-mirror/metis/metis-5.1.0.tar.gz
> curl: (37) Couldn't open file /okyanus/users/afarea/spack/maphys-mirror/metis/metis-5.1.0.tar.gz
> ==> Fetching from file:///okyanus/users/afarea/spack/maphys-mirror/metis/metis-5.1.0.tar.gz failed.
> ==> Trying to fetch from http://glaros.dtc.umn.edu/gkhome/fetch/sw/metis/metis-5.1.0.tar.gz
> ######################################################################## 100.0%
> ==> Staging archive: /okyanus/users/afarea/spack/spack/var/spack/stage/metis-5.1.0-fb2m5lx5hemba6sqnult3z5efojsnize/metis-5.1.0.tar.gz
> ==> Created stage in /okyanus/users/afarea/spack/spack/var/spack/stage/metis-5.1.0-fb2m5lx5hemba6sqnult3z5efojsnize
> ==> Applied patch install_gklib_defs_rename.patch
> ==> Building metis
> ==> Error: Command exited with status 2:
> 'make' '-j28'
>
> See build log for details:
> /tmp/afarea/spack-stage/spack-stage-7vzNSw/metis-5.1.0/spack-build.out
>
> /users/spack/spack/var/spack/repos/builtin/packages/metis/package.py:188, in install:
> 153 @when('@5:')
> 154 def install(self, spec, prefix):
> 155 options = []
> 156 options.extend(std_cmake_args)
> 157
> 158 build_directory = join_path(self.stage.path, 'spack-build')
> 159 source_directory = self.stage.source_path
> 160
> 161 options.append('-DGKLIB_PATH:PATH=%s/GKlib' % source_directory)
> 162 options.append('-DCMAKE_INSTALL_NAME_DIR:PATH=%s/lib' % prefix)
> 163
> 164 if '+shared' in spec:
> 165 options.append('-DSHARED:BOOL=ON')
> 166 if '+debug' in spec:
> 167 options.extend(['-DDEBUG:BOOL=ON',
> 168 '-DCMAKE_BUILD_TYPE:STRING=Debug'])
> 169 if '+gdb' in spec:
> 170 options.append('-DGDB:BOOL=ON')
> 171
> 172 metis_header = join_path(source_directory, 'include', 'metis.h')
> 173 if '+idx64' in spec:
> 174 filter_file('IDXTYPEWIDTH 32', 'IDXTYPEWIDTH 64', metis_header)
> 175 if '+real64' in spec:
> 176 filter_file('REALTYPEWIDTH 32', 'REALTYPEWIDTH 64', metis_header)
> 177
> 178 # Make clang 7.3 happy.
> 179 # Prevents "ld: section __DATA/__thread_bss extends beyond end of file"
> 180 # See upstream LLVM issue https://llvm.org/bugs/show_bug.cgi?id=27059
> 181 # and https://github.com/Homebrew/homebrew-science/blob/master/metis.rb
> 182 if spec.satisfies('%clang@7.3.0'):
> 183 filter_file('#define MAX_JBUFS 128', '#define MAX_JBUFS 24',
> 184 join_path(source_directory, 'GKlib', 'error.c'))
> 185
> 186 with working_dir(build_directory, create=True):
> 187 cmake(source_directory, *options)
> >> 188 make()
> 189 make('install')
> 190
> 191 # now run some tests:
> 192 for f in ['4elt', 'copter2', 'mdual']:
> 193 graph = join_path(source_directory, 'graphs', '%s.graph' % f)
> 194 Executable(join_path(prefix.bin, 'graphchk'))(graph)
> 195 Executable(join_path(prefix.bin, 'gpmetis'))(graph, '2')
> 196 Executable(join_path(prefix.bin, 'ndmetis'))(graph)
> 197
> 198 graph = join_path(source_directory, 'graphs', 'test.mgraph')
> 199 Executable(join_path(prefix.bin, 'gpmetis'))(graph, '2')
> 200 graph = join_path(source_directory, 'graphs', 'metis.mesh')
> 201 Executable(join_path(prefix.bin, 'mpmetis'))(graph, '2')
> 202
> 203 # install GKlib headers, which will be needed for ParMETIS
> 204 GKlib_dist = join_path(prefix.include, 'GKlib')
> 205 mkdirp(GKlib_dist)
> 206 hfiles = glob.glob(join_path(source_directory, 'GKlib', '*.h'))
> 207 for hfile in hfiles:
> 208 install(hfile, GKlib_dist)
> ==> Error: Installation process had nonzero exit code.
>
Any idea?


Issue #7: installation with intelmpi
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/7 · Ghost User · 2018-03-21

Hi,
I want to install MaPHyS with Intel MPI, but I got this error:
$ spack install maphys %intel \^mkl \^intelmpi
==> Error: maphys does not depend on intelmpi
Any idea?


Issue #5: [restored issue] - compatibility maphys - pastix 6
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/5 · PRUVOST Florent · 2017-11-17

Is maphys working with pastix 6?
cf. https://gitlab.inria.fr/solverstack/pastix

**Warning**: this issue has been restored from backup and may have been changed. For example, all comments have been lost.


Issue #3: [restored issue] - update cmake_modules
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/3 · PRUVOST Florent · 2017-09-20

Please update the cmake modules. I've updated FindSCALAPACK to be able to find scalapack from the latest Debian binary package, libscalapack-mpi-dev. This can be useful for you.
Example:
```
git submodule update --remote cmake_modules/morse_cmake
git commit cmake_modules/morse_cmake -m "update morse_cmake submodule"
git push --recurse-submodules=check
```
**Warning**: this issue has been restored from backup and may have been changed. For example, all comments have been lost.


Issue #2: Merge C and Fortran maphys interface
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/2 · MARAIT Gilles · 2017-06-15

2 different files to define the same stuff!
include/xmph_maphys_type_c.h
include/mph_defs_f.h
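A sketch of one way to collapse such duplication, assuming the shared part is mostly the control arrays (module and type names here are hypothetical, not the existing MaPHyS ones): declare the layout once in Fortran with BIND(C), so the C side can rely on an identical layout instead of a hand-maintained mirror.
```
Module mph_shared_defs
  ! Hedged sketch: a single BIND(C) definition that both languages can
  ! share, instead of maintaining xmph_maphys_type_c.h and mph_defs_f.h
  ! separately. Sizes follow the ICNTL(1..50)/RCNTL(1..21) arrays seen
  ! elsewhere in this tracker.
  Use iso_c_binding, Only : c_int, c_double
  Implicit None
  Integer, Parameter :: MPH_ICNTL_SIZE = 50, MPH_RCNTL_SIZE = 21
  Type, Bind(C) :: mph_controls_t   ! hypothetical name
     Integer(c_int) :: icntl(MPH_ICNTL_SIZE)
     Real(c_double) :: rcntl(MPH_RCNTL_SIZE)
  End Type mph_controls_t
End Module mph_shared_defs
```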

Issue #1: Error with zdotc and cdotc when compiling with gfortran+mkl
https://gitlab.inria.fr/solverstack/maphys/maphys/-/issues/1 · MARAIT Gilles · 2017-07-28

When MaPHyS is compiled with gfortran using MKL on plafrim2, a segmentation fault can occur.
The error seems to come from the zdotc and cdotc functions in MKL.
```
#5 zmph_test_all () at /home/gmarait/MAPHYS-DEV/spack-build/tests/unittest/zmph_test_all.F90:28 (at 0x0000000000402fff)
#4 test_utils_mod::test (sub=..., text=..., _text=25) at /home/gmarait/MAPHYS-DEV/tests/unittest/test_utils_mod.F90:107 (at 0x00007ffff7b935ac)
#3 zmph_test_dense_matrix_mod::zmph_test_dm_matvect_unsym () at /home/gmarait/MAPHYS-DEV/spack-build/tests/unittest/zmph_test_dense_matrix_mod.F90:56 (at 0x00007ffff7bbc079)
#2 zmph_test_dense_matrix_mod::zmph_test_dm_matvect (a=..., x=..., b=..., eps=1.0000000036274937e-15) at /home/gmarait/MAPHYS-DEV/spack-build/tests/unittest/zmph_test_dense_matrix_mod.F90:129 (at 0x00007ffff7bbbeea)
#1 zmph_test_fixture_mod::zmph_norm2_diff (a1=..., a2=..., len=5) at /home/gmarait/MAPHYS-DEV/spack-build/tests/unittest/zmph_test_fixture_mod.F90:478 (at 0x00007ffff7bb78cc)
#0 zdotc_ () from /cm/shared/apps/intel/composer_xe/2017.0.020/compilers_and_libraries_2017.0.098/linux/mkl/lib/intel64/libmkl_intel_lp64.so (at 0x00007ffff5f4755f)
```
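For context, this crash pattern matches the well-known ABI mismatch between gfortran and MKL's Intel interface layer for complex-valued BLAS functions; a minimal sketch that illustrates the failing call (illustrative only; the usual workaround, as far as I know, is to link libmkl_gf_lp64 instead of libmkl_intel_lp64, or to route zdotc/cdotc through a wrapper):
```
Program zdotc_abi_sketch
  ! Hedged sketch: zdotc is a COMPLEX*16-valued function. gfortran's
  ! convention for returning complex function results differs from the
  ! one libmkl_intel_lp64 expects, which can segfault exactly as in the
  ! backtrace above.
  Implicit None
  Complex(kind=8) :: x(5), y(5), r
  Complex(kind=8), External :: zdotc
  x = (1.0d0, 2.0d0)
  y = (3.0d0, -1.0d0)
  r = zdotc(5, x, 1, y, 1)   ! crashes under gfortran + libmkl_intel_lp64
  Print *, r
End Program zdotc_abi_sketch
```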