  1. Nov 23, 2016
  2. Nov 15, 2016
  3. Jul 15, 2016
  4. Jul 14, 2016
  5. Jul 13, 2016
  6. Jul 07, 2016
  7. Jul 06, 2016
  8. Jul 05, 2016
  9. Jul 01, 2016
  10. Jun 27, 2016
    • [OpenMP] Diagnose missing cases of statements between target and teams directives · 7b4330af
      Kelvin Li authored
      Clang fails to diagnose cases such as:
      #pragma omp target
        while(0) {
          #pragma omp teams
          {}
        }
      
      A patch by David Sheinkman.
      
      
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@273908 91177308-0d34-0410-b5e6-96231b3b80d8
    • Resubmission of http://reviews.llvm.org/D21564 after fixes. · 012ef212
      Carlo Bertolli authored
      [OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'
      
      This patch is an initial implementation of '#pragma omp distribute parallel for'.
      The main differences that affect other pragmas are:
      
      The implementation of 'distribute parallel for' requires blocking of the associated loop: blocks are "distributed" to different teams, and iterations within each block are scheduled to parallel threads within each team. To implement blocking, Sema creates two additional worksharing-directive fields that pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'; scheduling of 'for' iterations to threads can then use those bounds.
      As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This value is returned by the runtime, and Sema prepares a DistIncrExpr variable to hold it.
      Also as a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. 'N' in for(i = 0; i < N; i++)) but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
      
      
      
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@273884 91177308-0d34-0410-b5e6-96231b3b80d8
  11. Jun 24, 2016
    • Revert r273705 · 912e4df3
      Carlo Bertolli authored
       [OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for'
      
      
      
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@273709 91177308-0d34-0410-b5e6-96231b3b80d8
    • [OpenMP] Initial implementation of parse and sema for composite pragma 'distribute parallel for' · 396e7147
      Carlo Bertolli authored
      http://reviews.llvm.org/D21564
      
      This patch is an initial implementation of '#pragma omp distribute parallel for'.
      The main differences that affect other pragmas are:
      
      The implementation of 'distribute parallel for' requires blocking of the associated loop: blocks are "distributed" to different teams, and iterations within each block are scheduled to parallel threads within each team. To implement blocking, Sema creates two additional worksharing-directive fields that pass the team-assigned block lower and upper bounds through the outlined function resulting from 'parallel'; scheduling of 'for' iterations to threads can then use those bounds.
      As a consequence of blocking, the stride of 'distribute' is not 1 but equal to the blocking size. This value is returned by the runtime, and Sema prepares a DistIncrExpr variable to hold it.
      Also as a consequence of blocking, the global upper bound (EnsureUpperBound) expression of the 'for' is not the original loop upper bound (e.g. 'N' in for(i = 0; i < N; i++)) but the team-assigned block upper bound. Sema creates a new expression holding the calculation of the actual upper bound for 'for' as UB = min(UB, PrevUB), where UB is the loop upper bound and PrevUB is the team-assigned block upper bound.
      
      
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@273705 91177308-0d34-0410-b5e6-96231b3b80d8
  12. Jun 23, 2016
  13. Jun 21, 2016
  14. Jun 15, 2016
  15. Jun 09, 2016
  16. May 27, 2016
  17. May 26, 2016
  18. May 25, 2016
  19. May 17, 2016
  20. May 09, 2016
  21. Apr 29, 2016
  22. Apr 28, 2016
  23. Apr 26, 2016
    • [OpenMP] Improve mappable expressions Sema. · e53ba6d4
      Samuel Antao authored
      Summary:
      This patch adds logic to save the components of mappable expressions in the clause that uses them, so that they don't have to be recomputed during codegen. Given that the mappable components are (or will be) used in several clauses, a new generic implementation, `OMPMappableExprListClause`, is used that extends the existing `OMPVarListClause`.
      
      This patch does not add new tests. The goal is to preserve the existing functionality while storing more info in the clauses.
      
      Reviewers: hfinkel, carlo.bertolli, arpith-jacob, kkwli0, ABataev
      
      Subscribers: cfe-commits, caomhin
      
      Differential Revision: http://reviews.llvm.org/D19382
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@267560 91177308-0d34-0410-b5e6-96231b3b80d8
  24. Apr 25, 2016
    • [OPENMP 4.5] Codegen for 'taskloop' directive. · 8ee6f73c
      Alexey Bataev authored
      The taskloop construct specifies that the iterations of one or more associated loops will be executed in parallel using OpenMP tasks. The iterations are distributed across tasks created by the construct and scheduled for execution.
      The following code will be generated for the taskloop directive:
          #pragma omp taskloop num_tasks(N) lastprivate(j)
              for( i=0; i<N*GRAIN*STRIDE-1; i+=STRIDE ) {
                int th = omp_get_thread_num();
                #pragma omp atomic
                  counter++;
                #pragma omp atomic
                  th_counter[th]++;
                j = i;
          }
      
      Generated code:
      task = __kmpc_omp_task_alloc(NULL, gtid, 1, sizeof(struct task),
                                   sizeof(struct shar), &task_entry);
      psh = task->shareds;
      psh->pth_counter = &th_counter;
      psh->pcounter = &counter;
      psh->pj = &j;
      task->lb = 0;
      task->ub = N*GRAIN*STRIDE-2;
      task->st = STRIDE;
      __kmpc_taskloop(
          NULL,                    // location
          gtid,                    // gtid
          task,                    // task structure
          1,                       // if clause value
          &task->lb,               // lower bound
          &task->ub,               // upper bound
          STRIDE,                  // loop increment
          0,                       // 1 if nogroup specified
          2,                       // schedule type: 0-none, 1-grainsize, 2-num_tasks
          N,                       // schedule value (ignored for type 0)
          (void*)&__task_dup_entry // tasks duplication routine
      );
      
      git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@267395 91177308-0d34-0410-b5e6-96231b3b80d8
  25. Apr 22, 2016
  26. Apr 20, 2016