Commit 0869870c authored by Nathalie Furmento

tutorials/2015-06-PATC: updates

git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/starpu/website@15566 176f6dd6-97d6-42f4-bd05-d3db9ad07c7a
parent 9068b96c
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -q mirage -l nodes=1:ppn=12
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
make gemm/sgemm
STARPU_WORKER_STATS=1 ./gemm/sgemm
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
......
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -q mirage -l nodes=1:ppn=12
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
make mult
STARPU_WORKER_STATS=1 ./mult
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
......
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -q mirage -l nodes=1:ppn=12
starpu_machine_display
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
starpu_machine_display
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -q mirage -l nodes=1:ppn=12
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
make vector_scal_task_insert
./vector_scal_task_insert
# to force execution on a GPU device (by default, CUDA will be used)
# STARPU_NCPUS=0 ./vector_scal_task_insert
# to force execution on an OpenCL device
# STARPU_NCPUS=0 STARPU_NCUDA=0 ./vector_scal_task_insert
@@ -51,20 +51,20 @@ platform.
</p>
<P>
Once you are connected, we advise you to add the following lines at
the end of your <tt>.bashrc</tt> file.
the end of your <tt>.bash_profile</tt> file.
</p>
<tt><pre>
module purge
module load compiler/intel
module load hardware/hwloc
module load gpu/cuda/5.5
module load compiler/cuda
module load mpi/intel
module load trace/fxt/0.2.13
module load runtime/starpu/1.1.2
module load runtime/starpu/1.1.4
</pre></tt>
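<p>
To pick up these modules in your current shell without reconnecting, you
can, for instance, source the profile and check what is loaded (a minimal
sketch using the standard <tt>module</tt> command):
</p>
<tt><pre>
# reload the profile in the current shell
source ~/.bash_profile
# list the currently loaded modules; runtime/starpu/1.1.4 should appear
module list
</pre></tt>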
<p>
<!--
<b>Important:</b>
Due to an issue with the NFS-mounted home, you need to redirect CUDA's cache to
......@@ -76,6 +76,7 @@ mkdir -p /tmp/$USER-nv
ln -s /tmp/$USER-nv ~/.nv
</pre></tt>
</p>
-->
</div>
@@ -89,20 +90,27 @@ participants. Here is
a <a href="files/starpu_machine_display.pbs">script</a> to submit your
first StarPU application. It calls the
tool <tt>starpu_machine_display</tt> which shows the processing units
that StarPU can use, and the bandwitdh and affinity measured between
that StarPU can use, and the bandwidth and affinity measured between
the memory nodes.
</p>
<p>
PlaFRIM/DiHPES nodes are normally accessed through queues. For our lab
works, no queue needs to be specified, however if you have an account
on the platform and you want to use the same pool of machines, you
will need to specify you want to use the <tt>mirage</tt> queue.
PlaFRIM/DiHPES nodes are accessed through queues, each grouping
machines with similar characteristics. For our lab works, two sets of
machines are available:
<ul>
<li> GPU nodes accessed with the queue <tt>formation_gpu</tt>.
</li>
<li> Non-GPU nodes accessed with the queue <tt>formation</tt>.
</li>
</ul>
For the rest of the tutorial, we will use the queue <tt>formation_gpu</tt>.
</p>
<tt><pre>
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
starpu_machine_display
</pre></tt>
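<p>
As an illustration, the script can be submitted with the standard PBS
<tt>qsub</tt> command (a minimal sketch, assuming the script is saved as
<tt>starpu_machine_display.pbs</tt> in the current directory; the exact
output file name depends on the job id assigned by the scheduler):
</p>
<tt><pre>
# submit the job to the batch scheduler
qsub starpu_machine_display.pbs
# once the job has completed, inspect its standard output file
cat starpu_machine_display.pbs.o*
</pre></tt>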
@@ -143,7 +151,7 @@ $ export STARPU_HOSTNAME=mirage
</pre></tt>
<p>
Also add this do your <tt>.bashrc</tt> for further connections. Of course, on
Also add this to your <tt>.bash_profile</tt> for further connections. Of course, on
a heterogeneous cluster, the cluster launcher script should set various
hostnames for the different node classes, as appropriate.
</p>
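<p>
For example, the setting can be appended to the profile in one line (a
minimal sketch; adjust the hostname for other node classes on a
heterogeneous cluster):
</p>
<tt><pre>
# make performance models use the mirage machine name on every connection
echo "export STARPU_HOSTNAME=mirage" >> ~/.bash_profile
</pre></tt>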
@@ -202,7 +210,7 @@ given factor.
<tt><pre>
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
@@ -340,7 +348,7 @@ Run the application with the <a href="files/mult.pbs">batch scheduler</a>, enabl
<tt><pre>
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
@@ -372,7 +380,7 @@ Let's execute it.
<tt><pre>
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
@@ -612,7 +620,7 @@ complete.
<tt><pre>
#how many nodes and cores
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12
#PBS -W x=NACCESSPOLICY:SINGLEJOB -l nodes=1:ppn=12 -q formation_gpu
# go in the directory from which the submission was made
cd $PBS_O_WORKDIR
......