solverstack / PaStiX · Issue #34
Closed
Issue created Oct 16, 2018 by Andrea Piacentini (@andrea3.14)

pastixFinalize (from Fortran) not threadsafe

This piece of code works:

               IF ( self%ll_set) THEN
                  DO ib_mat = 1, self%il_nbmat
                     CALL pastixFinalize (self%sla_mat(ib_mat)%pastix_data)
                  END DO
               END IF

!$omp parallel num_threads(NBTHDS_B) default(none), &
!$omp shared(self, NBTHDS_B), &
!$omp private(ib_thr, ib_mat, il_gmat), &
!$omp private(matrix, sys, il_info)
!$omp do
               DO ib_thr = 1, NBTHDS_B
                  DO ib_mat = 1, self%il_nblocmat(ib_thr)
                     il_gmat = self%ila_glomat(ib_mat,ib_thr)
                     matrix => self%sla_mat(il_gmat)
                     matrix%iparm(:) = self%iparm(:)
                     matrix%dparm(:) = self%dparm(:)

                     CALL pastixInit( matrix%pastix_data, 0, &
                        & matrix%iparm, matrix%dparm)

                     CALL spmConvert(SpmCSC, matrix%spm, il_info)
                     
                     CALL pastix_task_analyze( matrix%pastix_data, matrix%spm, il_info )
                     CALL pastix_task_numfact( matrix%pastix_data, matrix%spm, il_info )

                     sys => self%sla_sys(il_gmat)
                     sys%idmat = ib_mat
                     sys%nrhs   = 1

                  END DO
               END DO
!$omp end do
!$omp end parallel

while this one (note that pastixFinalize is now called inside the OpenMP parallel loop, whereas above it ran in a separate, single-threaded loop beforehand) fails with a "double free or corruption" error:

!$omp parallel num_threads(NBTHDS_B) default(none), &
!$omp shared(self, NBTHDS_B), &
!$omp private(ib_thr, ib_mat, il_gmat), &
!$omp private(matrix, sys, il_info)
!$omp do
               DO ib_thr = 1, NBTHDS_B
                  DO ib_mat = 1, self%il_nblocmat(ib_thr)
                     il_gmat = self%ila_glomat(ib_mat,ib_thr)
                     matrix => self%sla_mat(il_gmat)
                     matrix%iparm(:) = self%iparm(:)
                     matrix%dparm(:) = self%dparm(:)

                     IF ( self%ll_set ) CALL pastixFinalize ( matrix%pastix_data )
                     CALL pastixInit( matrix%pastix_data, 0, &
                        & matrix%iparm, matrix%dparm)

                     CALL spmConvert(SpmCSC, matrix%spm, il_info)
                     
                     CALL pastix_task_analyze( matrix%pastix_data, matrix%spm, il_info )
                     CALL pastix_task_numfact( matrix%pastix_data, matrix%spm, il_info )

                     sys => self%sla_sys(il_gmat)
                     sys%idmat = ib_mat
                     sys%nrhs   = 1

                  END DO
               END DO
!$omp end do
!$omp end parallel

Thanks
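A possible workaround, assuming the crash comes from pastixFinalize's cleanup not being reentrant when called concurrently: serialize only the finalize/init pair with a named OpenMP critical section, while leaving the analyze and factorization calls parallel. This is an unverified sketch adapted from the failing loop above, not a confirmed fix:

                DO ib_thr = 1, NBTHDS_B
                   DO ib_mat = 1, self%il_nblocmat(ib_thr)
                      il_gmat = self%ila_glomat(ib_mat,ib_thr)
                      matrix => self%sla_mat(il_gmat)
                      matrix%iparm(:) = self%iparm(:)
                      matrix%dparm(:) = self%dparm(:)

 ! Only one thread at a time may tear down / set up a pastix_data handle
 !$omp critical (pastix_setup)
                      IF ( self%ll_set ) CALL pastixFinalize ( matrix%pastix_data )
                      CALL pastixInit( matrix%pastix_data, 0, &
                         & matrix%iparm, matrix%dparm)
 !$omp end critical (pastix_setup)

                      CALL spmConvert(SpmCSC, matrix%spm, il_info)
                      CALL pastix_task_analyze( matrix%pastix_data, matrix%spm, il_info )
                      CALL pastix_task_numfact( matrix%pastix_data, matrix%spm, il_info )
                   END DO
                END DO

If this makes the corruption disappear, it would support the hypothesis that pastixFinalize touches shared (global) state and is the non-threadsafe call, at the cost of serializing the setup phase.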
