MaPHyS with multithreading example
I am trying to run the MaPHyS dmph_examplethreadkv example with the following input settings:
# comment
MATFILE = bcsstk17.mtx
SYM = 1 # 0: general, 1: SPD, 2: symmetric
ICNTL(1) = 1 # Controls where to write error messages. def:0 (stderr)
ICNTL(2) = 1 # Controls where to write warning messages. def:0 (stderr)
ICNTL(3) = 6 # Controls where to write statistics messages. def:6 (stdout)
ICNTL(4) = 5 # Controls the verbosity of printed messages (1-4). def:3 (print errors, warnings & detailed statistics); 5: print everything
ICNTL(5) = 1 # Controls when to print the list of controls (Xcntl). def:0 (never print); 1: beginning, 2: each step
ICNTL(6) = 1 # Controls when to print the list of information (Xinfo). def:0 (never print); 1: beginning, 2: each step
ICNTL(7) = 4 # Partitioning strategy (1: METIS-NODEND, 2: METIS-EDGEND, 3: METIS-NODEWND, 4: SCOTCH-CUSTOM). old value: 4
ICNTL(8) = -1 # Level of filling for L and U in the ILUT method. def:-1. Relevant only if ICNTL(30)=2
ICNTL(9) = -1 # Level of filling for the Schur complement in the ILUT method. def:-1. Relevant only if ICNTL(30)=2
ICNTL(10) = 0
ICNTL(11) = 0
ICNTL(12) = 0
ICNTL(13) = 2 # (P) Direct solver for the factorization & the preconditioner. 1: MUMPS, 2: PaStiX, 3: use multiple sparse direct solvers. See ICNTL(15,32)
ICNTL(14) = 0 # Output format. def:0 (stdout), 1: emak
ICNTL(15) = 2 # (P) Direct solver for the preconditioner. 1: MUMPS, 2: PaStiX, 3: multiple. See ICNTL(13,32)
ICNTL(16) = 0
ICNTL(17) = 0
ICNTL(18) = 0
ICNTL(19) = 1
ICNTL(20) = 2 # (P) 3rd-party iterative solver. 0: unset, 1: GMRES, 2: CG, 3: automatic. def:3
ICNTL(21) = 1 # Preconditioner strategy (1: local DENSE, 2: local SPARSE, 4: no preconditioner). values: 1,2,3,4,5,10
ICNTL(22) = 0 # (P) Controls the orthogonalization scheme of the iterative solver. 0: modified GS, 1: iterative selective GS, 2: classical GS, 3: iterative GS. def:3
ICNTL(23) = 0 # Controls whether the user supplies an initial guess. 0: no, 1: yes. def:0
ICNTL(24) = 10000 # (P) Iterative solver - maximum number of iterations
ICNTL(25) = 0 # Strategy to compute the residual. 0: recurrent, 1: residual. Irrelevant when the iterative solver is CG (ICNTL(20) = 2,3)
ICNTL(26) = 500 # (P) Iterative solver - GMRES: restart every X iterations. Ignored if the solver is CG (ICNTL(20) = 2, or 3 with SPD)
ICNTL(27) = 0 # Iterative solver - Schur complement matrix/vector product (1: EXPLICIT, 2: IMPLICIT)
ICNTL(28) = 1 # Whether the scaled residual is computed. def:1
ICNTL(29) = 1 # Mode of the FABULOUS iterative solver. def:1
ICNTL(30) = 0 # How to compute the Schur complement or its approximation. def:0 (Schur returned by the sparse direct solver package); 2: sparse approximation based on partial ILU(t,p), should set ICNTL(8,9) and RCNTL(8,9)
ICNTL(31) = 50
ICNTL(32) = 2 # (P) Direct solver for the local Schur factorization. def:ICNTL(13). See ICNTL(13,15)
ICNTL(33) = 10 # Number of eigenvalues per subdomain. def:10. Ignored if ICNTL(21) != 10
ICNTL(34) = 0 # Whether the convergence history of the iterative solver is written to a file. def:0 (regular output); 1: the file is named gmres cvg N.dat or cg cvg N.dat
ICNTL(35) = 0
ICNTL(36) = 2 # How to bind threads inside MAPHYS. 0: no bind, 1: thread-to-core bind, 2: grouped bind. def:0. e.g. smph_examplethread
ICNTL(37) = 1 # (P) 2-level parallelism: number of nodes. Only useful if ICNTL(42) > 0. def:1
ICNTL(38) = 1 # (P) 2-level parallelism: number of cores per node. Only useful if ICNTL(42) > 0. def:1
ICNTL(39) = 4 # (P) 2-level parallelism: number of threads (processes) per domain. Only useful if ICNTL(42) > 0. def:1
ICNTL(40) = 4 # (P) 2-level parallelism: number of domains. Only useful if ICNTL(42) > 0. def:1
ICNTL(41) = 0
ICNTL(42) = 1 # (important) Level of parallelism. def:0 (MPI only); 1: multithreading, should set ICNTL(37,38,39)
ICNTL(43) = 1 # Input system (centralized on the host, distributed, ...). def:1. e.g. smph_examplerestart -> 3; paddle -> 2 (experimental)
ICNTL(44) = 0 # When activated, MAPHYS uses a user-supplied permutation. def:0. e.g. xmph_exampledistkv in examples
ICNTL(45) = 0 # Local output after analysis. If set to 1, dumps the local matrices. def:0
ICNTL(46) = 0
ICNTL(47) = 20 # Controls the MUMPS instance. def:20
ICNTL(48) = 10 # Controls the FABULOUS Deflated Restart algorithm. def:20
ICNTL(49) = 1 # Controls the choice of domain decomposition library/algorithm. Warning: modifies the behavior of ICNTL(43). def:1 (MAPHYS); 2: paddle
ICNTL(50) = 0 # Behavior when a MUMPS error indicates that more memory workspace is required. def:0
RCNTL(1) = 0.000E+00
RCNTL(2) = 0.000E+00 # Target for the FABULOUS Deflated Restart algorithm. def:0.0
RCNTL(3) = 0.000E+00 # Sets the value of α for the custom stopping criterion of GMRES and CG (ICNTL(28) = 2). def:0
RCNTL(4) = 0.000E+00 # Sets the value of β for the custom stopping criterion of GMRES and CG (ICNTL(28) = 2). def:0
RCNTL(5) = 2.000E+00 # MUMPS: factor by which the extra workspace (initially given by ICNTL(47)) is multiplied for the next try. def:2.0
RCNTL(6) = 0.000E+00
RCNTL(7) = 0.000E+00
RCNTL(8) = 0.000E+00 # Threshold used to sparsify the LU factors when using PILUT. def:0.0. Relevant only if ICNTL(30)=2
RCNTL(9) = 0.000E+00 # Threshold used to sparsify the Schur complement while computing it with PILUT. def:0.0. Relevant only if ICNTL(30)=2
RCNTL(10) = 0.000E+00
RCNTL(11) = 1.0e-6 # Preconditioner - local SPARSE - sparsifying tolerance (relevant if ICNTL(21)=2)
RCNTL(12) = 1.000E-02
RCNTL(13) = 0.000E+00
RCNTL(14) = 0.000E+00
RCNTL(15) = 2.000E-01 # Imbalance tolerance used in the Scotch partitioner to create the subdomains. def:0.2
RCNTL(16) = 0.000E+00
RCNTL(17) = 0.000E+00
RCNTL(18) = 0.000E+00
RCNTL(19) = 0.000E+00
RCNTL(20) = 0.000E+00
RCNTL(21) = 1e-12 #(P) Iterative Solver - Convergence criteria
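For reference, this is how I read the key/value input format and the two-level parallelism settings above, as a minimal Python sketch. The parsing rules and the cores-versus-threads arithmetic are my own assumptions based on the comments in my input file, not taken from the MaPHyS documentation:

```python
def parse_kv(line):
    """Parse one 'KEY = VALUE # comment' line of the input file.
    Returns (key, value) or None for blank/comment-only lines."""
    line = line.split("#", 1)[0].strip()  # drop trailing comment
    if not line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

# The four 2-level parallelism controls, as set in the file above.
settings = dict(
    kv for kv in map(parse_kv, [
        "# comment",
        "ICNTL(37) = 1   # number of nodes",
        "ICNTL(38) = 1   # cores per node",
        "ICNTL(39) = 4   # threads per domain",
        "ICNTL(40) = 4   # domains",
    ]) if kv is not None
)

cores = int(settings["ICNTL(37)"]) * int(settings["ICNTL(38)"])
threads = int(settings["ICNTL(39)"]) * int(settings["ICNTL(40)"])
print(cores, threads)  # 1 16
```

If my reading of ICNTL(37-40) is right, these settings request 4x4 = 16 threads on 1x1 = 1 core, which may itself be worth double-checking, although the error below seems to occur before the input file could matter.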
but I got this error:
mpirun -n 4 ./dmph_examplethreadkv thread.in
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10096] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10097] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
The MPI_Comm_f2c() function was called before MPI_INIT was invoked.
This is disallowed by the MPI standard.
Your MPI job will now abort.
[sariyer:10098] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
[sariyer:10095] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[30023,1],1]
Exit code: 1
Any ideas are highly appreciated.