Commit d9604e26 authored by Nathalie Furmento

website: tutorials/2014-05-PATC

git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/starpu/website@12910 176f6dd6-97d6-42f4-bd05-d3db9ad07c7a
parent 633b9201
@@ -471,31 +471,33 @@ performance model.
 <tt><pre>
 $ starpu_perfmodel_display -l
-file: &lt;starpu_sgemm_gemm.erik&gt;
+file: &lt;starpu_sgemm_gemm.mirage&gt;
 $ starpu_perfmodel_display -s starpu_sgemm_gemm
-performance model for cpu
-# hash      size       mean          dev           n
-8bd4e11d    2359296    9.318547e+04  4.335047e+02  700
-performance model for cuda_0
-# hash      size       mean          dev           n
-8bd4e11d    2359296    3.396056e+02  3.391979e+00  900
+performance model for cpu_impl_0
+# hash      size       flops         mean (us)     stddev (us)   n
+8bd4e11d    2359296    0.000000e+00  1.848856e+04  4.026761e+04  12
+performance model for cuda_0_impl_0
+# hash      size       flops         mean (us)     stddev (us)   n
+8bd4e11d    2359296    0.000000e+00  4.918095e+02  9.404866e+00  66
 ...
 </pre></tt>
-<p>This shows that for the sgemm kernel with a 2.5M matrix slice, the average
-execution time on CPUs was about 93ms, with a 0.4ms standard deviation, over
-700 samples, while it took about 0.033ms on GPUs, with a 0.004ms standard
+<p>
+This shows that for the sgemm kernel with a 2.5M matrix slice, the average
+execution time on CPUs was about 18ms, with a 40ms standard deviation, over
+12 samples, while it took about 0.49ms on GPUs, with a 0.009ms standard
 deviation. It is a good idea to check this before doing actual performance
 measurements. If the kernel has varying performance, it may be a good idea to
 force StarPU to continue calibrating the performance model, by using <tt>export
 STARPU_CALIBRATE=1</tt>.
 </p>
-<p>If the code of a computation kernel is modified, the performance changes, the
+<p>
+If the code of a computation kernel is modified, its performance changes, and the
 performance model thus has to be recalibrated from scratch. To do so, use
 <tt>export STARPU_CALIBRATE=2</tt>.
 </p>
</div>
-->
</div>
<div class="section">