Commit 61adbf58 authored by THIBAULT Samuel

update

git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/starpu/website@16389 176f6dd6-97d6-42f4-bd05-d3db9ad07c7a
parent d00a4faf
@@ -161,7 +161,7 @@
 available on the compute resource</b>. Data are also kept on e.g. GPUs as long as
 they are needed for further tasks. When a device runs out of memory, StarPU uses
 an LRU strategy to <b>evict unused data</b>. StarPU also takes care of <b>automatically
 prefetching</b> data, which thus permits to <b>overlap data transfers with computations</b>
-(including GPU-GPU direct transfers) to achieve the most of the architecture.
+(including <b>GPU-GPU direct transfers</b>) to achieve the most of the architecture.
 </p>
 <h4>Dependencies</h4>
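The hunk above edits the paragraph on automatic data management (replication on GPUs, LRU eviction, prefetching, direct GPU-GPU transfers). As a hedged illustration of the programming pattern that paragraph describes, here is a minimal C sketch assuming the standard StarPU 1.2 task-insertion API (starpu_vector_data_register, starpu_task_insert, STARPU_MAIN_RAM); the scal_cpu kernel, vector size and scaling factor are invented for the example. Once the data is registered, all transfers, eviction and prefetching are left to the runtime.

```c
#include <stdint.h>
#include <starpu.h>

/* CPU implementation of the kernel; CUDA/OpenCL variants could be added to
 * the same codelet so the scheduler may also place the task on accelerators. */
static void scal_cpu(void *buffers[], void *cl_arg)
{
	struct starpu_vector_interface *v = buffers[0];
	float *data = (float *) STARPU_VECTOR_GET_PTR(v);
	unsigned n = STARPU_VECTOR_GET_NX(v);
	float factor = *(float *) cl_arg;
	for (unsigned i = 0; i < n; i++)
		data[i] *= factor;
}

static struct starpu_codelet scal_cl =
{
	.cpu_funcs = { scal_cpu },
	.nbuffers = 1,
	.modes = { STARPU_RW },
};

int main(void)
{
	float vec[1024];
	float factor = 3.14f;
	starpu_data_handle_t handle;

	for (unsigned i = 0; i < 1024; i++)
		vec[i] = 1.0f;

	if (starpu_init(NULL) != 0)
		return 1;

	/* Hand the vector over to StarPU; from here on, replication on GPUs,
	 * LRU eviction and prefetching are handled by the runtime. */
	starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
	                            (uintptr_t) vec, 1024, sizeof(vec[0]));

	/* Submit a task; the required transfers (including prefetches) are
	 * scheduled automatically from the declared access mode. */
	starpu_task_insert(&scal_cl,
	                   STARPU_RW, handle,
	                   STARPU_VALUE, &factor, sizeof(factor),
	                   0);

	starpu_task_wait_for_all();
	starpu_data_unregister(handle);
	starpu_shutdown();
	return 0;
}
```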
@@ -211,15 +211,15 @@
 will automatically determine which MPI node should execute which task, and
 have gotten excellent scaling on a 144-node cluster with GPUs, we have not yet
 had the opportunity to test on a yet larger cluster. We have however measured
 that with naive task submission, it should scale to a thousand nodes, and with
-pruning-tuned task submission, it should scale to about a million nodes.
+pruning-tuned task submission, it should scale to about a <b>million nodes</b>.
 </p>
 <h4>Out of core</h4>
 <p>
 When memory is not big enough for the working set, one may have to resort to
 using disks. StarPU makes this seamless thanks to its <a href="doc/html/OutOfCore.html">out of core support</a> (new in 1.2).
-StarPU will automatically evict data from the main memory in advance, and
-prefetch back required data before it is needed for tasks.
+StarPU will <b>automatically evict</b> data from the main memory in advance, and
+<b>prefetch back</b> required data before it is needed for tasks.
 </p>
 <h4>Extensions to the C Language</h4>
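The hunk above also covers the StarPU-MPI paragraph ("will automatically determine which MPI node should execute which task"). Below is a minimal, hedged sketch of that usage, assuming the 1.2-era StarPU-MPI interface (starpu_mpi_init, starpu_mpi_data_register, starpu_mpi_task_insert); the touch_cl codelet, the tag value 42 and the data size are placeholders for the example, not something taken from the website text.

```c
#include <stdint.h>
#include <mpi.h>
#include <starpu.h>
#include <starpu_mpi.h>

/* Trivial CPU kernel: the point of the sketch is task/data placement,
 * not the computation itself. */
static void touch_cpu(void *buffers[], void *cl_arg)
{
	(void) cl_arg;
	float *data = (float *) STARPU_VECTOR_GET_PTR((struct starpu_vector_interface *) buffers[0]);
	data[0] += 1.0f;
}

static struct starpu_codelet touch_cl =
{
	.cpu_funcs = { touch_cpu },
	.nbuffers = 1,
	.modes = { STARPU_RW },
};

int main(int argc, char **argv)
{
	int rank;
	float vec[1024] = { 0.0f };
	starpu_data_handle_t handle;

	starpu_init(NULL);
	starpu_mpi_init(&argc, &argv, 1);	/* 1: let StarPU-MPI call MPI_Init */
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	/* Rank 0 owns the vector; the other ranks only register a placeholder. */
	if (rank == 0)
		starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
		                            (uintptr_t) vec, 1024, sizeof(vec[0]));
	else
		starpu_vector_data_register(&handle, -1, (uintptr_t) NULL,
		                            1024, sizeof(vec[0]));

	/* Attach an MPI tag and an owner rank to the handle. */
	starpu_mpi_data_register(handle, 42, 0);

	/* Every rank submits the same task graph; StarPU-MPI decides which node
	 * executes each task (by default the owner of the data written to) and
	 * posts the corresponding sends/receives itself. */
	starpu_mpi_task_insert(MPI_COMM_WORLD, &touch_cl,
	                       STARPU_RW, handle,
	                       0);

	starpu_task_wait_for_all();
	starpu_data_unregister(handle);
	starpu_mpi_shutdown();
	starpu_shutdown();
	return 0;
}
```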
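For the out-of-core paragraph in the same hunk, the sketch below shows one possible setup, assuming the starpu_disk_register / starpu_disk_unistd_ops interface from the 1.2 out-of-core documentation linked above; the directory and the 1 GB budget are placeholders, and in practice a memory limit (for instance the STARPU_LIMIT_CPU_MEM environment variable) is what makes eviction to disk actually kick in.

```c
#include <starpu.h>

int main(void)
{
	if (starpu_init(NULL) != 0)
		return 1;

	/* Register a disk memory node backed by plain files in /tmp/starpu-ooc
	 * (placeholder path, assumed to exist), with a 1 GB budget.  When main
	 * memory runs short, StarPU evicts data handles to this node in advance
	 * and prefetches them back before the tasks that need them execute. */
	starpu_disk_register(&starpu_disk_unistd_ops,
	                     (void *) "/tmp/starpu-ooc",
	                     (starpu_ssize_t) 1024 * 1024 * 1024);

	/* ... register data handles and submit tasks as usual ... */

	starpu_task_wait_for_all();
	starpu_shutdown();
	return 0;
}
```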
@@ -343,6 +343,7 @@
 architectures, here is a non-exhaustive list:
 <li><a href="https://project.inria.fr/chameleon/">Chameleon</a>, dense linear algebra library</li>
 <li><a href="http://www.ida.liu.se/~chrke/skepu/">SkePU</a>, a skeleton programming framework.</li>
 <li><a href="http://pastix.gforge.inria.fr/">PaStiX</a>, sparse linear algebra library, starting from version 5.2.1</li>
+<li><a href="http://scalfmm-public.gforge.inria.fr/doc/">ScalFMM</a>, N-body interaction simulation using the Fast Multipole Method.</li>
 </ul>
 <p>
@@ -578,12 +579,21 @@
 Available <a href="http://hal.inria.fr/inria-00421333">here</a>.
 <h4>On the simulation support through SimGrid</h4>
 <ol>
+<li>
+L. Stanisic, S. Thibault, A. Legrand, B. Videau, and J.-F. Méhaut.<br/>
+<b>Faithful Performance Prediction of a Dynamic Task-Based Runtime System for Heterogeneous Multi-Core Architectures</b>
+In <em>Concurrency and Computation: Practice and Experience</em>, May 2015.<br/>
+Available <a href="https://hal.inria.fr/hal-01147997">here</a>.
+</li>
 <li>
 L. Stanisic, S. Thibault, A. Legrand, B. Videau, and J.-F. Méhaut.<br/>
 <b>Modeling and Simulation of a Dynamic Task-Based Runtime System for Heterogeneous Multi-Core Architectures</b>
 In <em>Euro-par 2014 - 20th International Conference on Parallel Processing</em>, Porto, Portugal, August 2014.<br/>
 Available <a href="http://hal.inria.fr/hal-01011633">here</a>.
 </li>
 </ol>
 <h4>On the Cell support</h4>
@@ -601,6 +611,14 @@
 Available <a href="http://hal.inria.fr/inria-00378705">here</a>.
 <h4>On Applications</h4>
 <ol>
+<li>
+V. Martínez, M. David, F. Dupros, O. Aumage, S. Thibault, H. Aochi, P. Navaux<br/>
+<b>Towards seismic wave modeling on heterogeneous many-core architectures using task-based runtime system</b>
+In <em>27th SBAC-PAD</em>, Florianopolis, Brazil, October 2015<br/>
+Available <a href="https://hal.inria.fr/hal-01182746">here</a>.
+</li>
 <li>
 S. Henry, A. Denis, D. Barthou, M.-C. Counilh, R. Namyst<br/>
 <b>Toward OpenCL Automatic Multi-Device Support</b>
 ...