Commit 03b16e0c authored by AUMAGE Olivier's avatar AUMAGE Olivier

update directories

parent fd09de01
#!/bin/bash
# @ class = clgpu
# @ job_name = job_gemm
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make gemm/sgemm
STARPU_WORKER_STATS=1 ./gemm/sgemm
#!/bin/bash
# @ class = clgpu
# @ job_name = job_ring
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make ring_async_implicit
mpirun -np 2 $PWD/ring_async_implicit
#!/bin/bash
# @ class = clgpu
# @ job_name = job_stencil
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make stencil5
mpirun -np 2 $PWD/stencil5 -display
......
#!/bin/bash
# @ class = clgpu
# @ job_name = job_mult
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make mult
STARPU_WORKER_STATS=1 ./mult
......
#!/bin/bash
# @ class = clgpu
# @ job_name = job_vector_scal
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make vector_scal_task_insert
STARPU_WORKER_STATS=1 ./vector_scal_task_insert
......
@@ -92,114 +92,57 @@ export LIBRARY_PATH=$LD_LIBRARY_PATH
<p>
You can either add the previous lines to your
file <tt>$HOME/.bash_profile</tt>, or use the script
file <tt>/home/compas18-16/tp_compas18_vars.sh</tt>
file <tt>/mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh</tt>
</p>
</div>
<!--
<div class="section">
<h3>Job Submission</h3>
<p>
Jobs can be submitted to the platform to reserve a set of nodes and to
execute an application on those nodes.
Here is a script to submit your
first StarPU application. It calls the
tool <tt>starpu_machine_display</tt> which shows the processing units
that StarPU can use, and the bandwidth and affinity measured between
the memory nodes.
</p>
<p>
MdS nodes are accessed through queues, each of which groups
machines with similar characteristics. For our lab sessions, we have two
sets of machines:
<ul>
<li> GPU nodes accessed with the queue <tt>clgpu</tt>. </li>
<li> Non-GPU nodes accessed with the queue <tt>clallmds</tt>. </li>
</ul>
</p>
<h3>Testing the installation</h3>
<tt>
<pre>
#!/bin/bash
# @ class = clgpu
# @ job_name = job_starpu_machine_display
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
starpu_machine_display
</pre>
</tt>
<p>
You will find a copy of the script in <tt>/gpfslocal/pub/training/runtime_june2016/starpu_machine_display.sh</tt>.
To submit the script, simply call:
You will find a copy of the script in <tt>/mnt/n7fs/ens/tp_abuttari/TP_StarPU/starpu_machine_display.sh</tt>.
To execute the script, simply call:
</p>
<tt>
<pre>
llsubmit starpu_machine_display.sh
starpu_machine_display.sh
</pre>
</tt>
<p>
The state of the job can be queried by calling the command <tt>llq | grep $USER</tt>.
Once the job has finished, the standard output and standard error generated by
the script execution are available in the files:
<ul>
<li>${HOME}/starpu/<b>jobname</b>.<b>sequence_number</b>.out</li>
<li>${HOME}/starpu/<b>jobname</b>.<b>sequence_number</b>.err</li>
</ul>
</p>
<p>
Note that the first time <tt>starpu_machine_display</tt> is executed,
it calibrates the performance model of the bus; the results are then
stored in different files in the
directory <tt>$HOME/.starpu/sampling/bus</tt>. If you run the command
several times, you will notice that StarPU may calibrate the bus speed
again. This is because the cluster's batch scheduler may assign a
different node each time; StarPU does not know that the local
cluster we use is homogeneous, so it assumes that all nodes of the
cluster may be different. The following line could be added to the
script file to force StarPU to use the same machine ID
for the whole cluster:
directory <tt>$HOME/.starpu/sampling/bus</tt>.
</p>
<tt>
<pre>
$ export STARPU_HOSTNAME=poincaregpu
</pre>
</tt>
<p>
Of course, on a heterogeneous cluster, the launcher script
should set a distinct hostname for each node class, as
appropriate.
</p>
</div>
-->
<!--
<div class="section">
<h3>Tutorial Material</h3>
<p>
All files needed for the lab sessions are available on the machine in the
directory <tt>/gpfslocal/pub/training/runtime_june2016</tt>.
directory <tt>/mnt/n7fs/ens/tp_abuttari/TP_StarPU</tt>.
</p>
</div>
</div>
-->
<div class="section">
<h2>Session Part 1: Task-based Programming Model</h2>
@@ -236,26 +179,15 @@ Here are the source files for the application:
</ul>
Run <tt>make</tt>, and run the
resulting <tt>vector_scal_task_insert</tt> executable using the batch
scheduler using the <a href="files/vector_scal.sh">given script
resulting <tt>vector_scal_task_insert</tt> executable
using the <a href="files/vector_scal.sh">given script
vector_scal.sh</a>. It should work out of the box: it simply scales a given
vector by a given factor.
</p>
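<p>
As a plain-shell illustration of the computation itself (not the StarPU
implementation, which performs it through a codelet executed by CPU or GPU
workers), scaling each element of a vector by a factor amounts to:
</p>

```shell
# Illustration only: scale each element of a small vector by a factor,
# which is the operation vector_scal_task_insert performs via StarPU tasks.
factor=3.0
printf '1.0\n2.0\n4.0\n' | awk -v f="$factor" '{ printf "%g\n", $1 * f }'
```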
<tt>
<pre>
#!/bin/bash
# @ class = clgpu
# @ job_name = job_vector_scal
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make vector_scal_task_insert
@@ -391,7 +323,7 @@ whole C result matrix.
</p>
<p>
Run the application with the <a href="files/mult.sh">batch scheduler</a>, enabling some statistics:
Run the application with the script <a href="files/mult.sh">mult.sh</a>, enabling some statistics:
</p>
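<p>
Beyond worker statistics, you may want to compare scheduling policies.
The sketch below uses <tt>STARPU_SCHED</tt>, a standard StarPU environment
variable (with stock policies such as <tt>eager</tt> and <tt>dmda</tt>); the
actual <tt>./mult</tt> invocation is commented out since the binary only
exists once built on the cluster:
</p>

```shell
# Sketch: run mult under several schedulers and compare the worker stats.
# STARPU_SCHED is a standard StarPU variable; the real run is commented
# out because ./mult must be built first on the cluster.
for sched in eager dmda; do
  echo "== STARPU_SCHED=$sched =="
  # STARPU_SCHED=$sched STARPU_WORKER_STATS=1 ./mult
done
```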
<tt>
@@ -423,17 +355,7 @@ Let's execute it.
<tt>
<pre>
#!/bin/bash
# @ class = clgpu
# @ job_name = job_gemm
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make gemm/sgemm
STARPU_WORKER_STATS=1 ./gemm/sgemm
@@ -678,17 +600,7 @@ complete.
<tt>
<pre>
#!/bin/bash
# @ class = clgpu
# @ job_name = job_ring
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make ring_async_implicit
mpirun -np 2 $PWD/ring_async_implicit
@@ -711,17 +623,7 @@ new distribution.
<tt>
<pre>
#!/bin/bash
# @ class = clgpu
# @ job_name = job_stencil
# @ total_tasks = 10
# @ node = 1
# @ wall_clock_limit = 00:10:00
# @ output = $(HOME)/starpu/$(job_name).$(jobid).out
# @ error = $(HOME)/starpu/$(job_name).$(jobid).err
# @ job_type = mpich
# @ queue
source /gpfslocal/pub/training/runtime_june2016/starpu_env.sh
source /mnt/n7fs/ens/tp_abuttari/TP_StarPU/tp_vars.sh
make stencil5
mpirun -np 2 $PWD/stencil5 -display
@@ -730,7 +632,7 @@ mpirun -np 2 $PWD/stencil5 -display
</div>
<div class="section">
<!--div class="section">
<h2>Session Part 4: OpenMP Support</h2>
<div class="section">
@@ -760,7 +662,7 @@ Homepage of the Klang-Omp OpenMP compiler: <a href="http://kstar.gforge.inria.fr
</p>
</div>
</div>
</div-->
<div class="section" id="contact">
@@ -859,38 +761,18 @@ units over time.
<div class="section" id="other">
<h2>Other Materials: Talk Slides and Website Links</h2>
<p>
<h3>General Session Introduction</h3>
<ul>
<li> <a href="slides/00_intro_runtimes.pdf">Slides: Introduction to Runtime Systems</a>
</li>
</ul>
<h3>The Hardware Locality Library (hwloc)</h3>
<ul>
<li> <a href="http://www.open-mpi.org/projects/hwloc/tutorials/">Tutorial:
hwloc</a>. For questions regarding hwloc, please
contact <a href="mailto:brice.goglin@inria.fr">brice.goglin@inria.fr</a>.
</li>
</ul>
<h3>The StarPU Runtime System</h3>
<ul>
<li> <a href="slides/01_introducing_starpu.pdf">Slides: StarPU - Part. 1 – Introducing StarPU</a></li>
<li> <a href="slides/02_mastering_starpu.pdf">Slides: StarPU - Part. 2 – Mastering StarPU</a></li>
</ul>
<h3>The EZtrace Performance Debugging Framework</h3>
<ul>
<li> <a href="http://eztrace.gforge.inria.fr/tutorials/index.html">Tutorial:
EzTrace</a>. For questions regarding EzTrace, please
contact <a href="mailto:eztrace-devel@lists.gforge.inria.fr">eztrace-devel@lists.gforge.inria.fr</a>.
</li>
</ul>
</p>
</div>
<div class="section bot">
<p class="updated">
Last updated on 2016/06/17.
Last updated on 2018/07/03.
</p>
</body>
</html>