Commit c9ccc16c authored by Nathalie Furmento

contents: clean up news, and remove references to gcc-plugin and cell architecture

parent 54fe032d
@@ -43,7 +43,7 @@
<li><b>The application provides algorithms and constraints</b>
<ul>
<li>CPU/GPU implementations of tasks</li>
<li>A graph of tasks, using either the StarPU's high level <b>GCC plugin</b> pragmas, StarPU's rich <b>C/C++ API</b>, or <b>OpenMP pragmas</b>.</li>
<li>A graph of tasks, using either StarPU's rich <b>C/C++ API</b> or <b>OpenMP pragmas</b> (see the sketch below).</li>
</ul>
<br>
</li>
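<p>
For illustration, here is a minimal sketch of a task graph expressed with
StarPU's C API. The kernel <tt>vector_scal_cpu</tt>, the codelet name
<tt>scal_cl</tt>, and the vector size are assumptions made for this example,
not part of the page above; error handling is kept to a minimum.
</p>
<pre>
#include &lt;stdint.h&gt;
#include &lt;starpu.h&gt;

/* CPU implementation of a task: scale a vector (hypothetical kernel). */
static void vector_scal_cpu(void *buffers[], void *cl_arg)
{
    struct starpu_vector_interface *v = buffers[0];
    float *data = (float *) STARPU_VECTOR_GET_PTR(v);
    unsigned n = STARPU_VECTOR_GET_NX(v);
    float factor;
    starpu_codelet_unpack_args(cl_arg, &factor);
    for (unsigned i = 0; i &lt; n; i++)
        data[i] *= factor;
}

static struct starpu_codelet scal_cl =
{
    .cpu_funcs = { vector_scal_cpu },
    .nbuffers = 1,
    .modes = { STARPU_RW },
};

int main(void)
{
    float vector[1024];
    float factor = 3.14f;
    starpu_data_handle_t handle;

    for (int i = 0; i &lt; 1024; i++)
        vector[i] = 1.0f;

    if (starpu_init(NULL) != 0)
        return 1;
    starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
                                (uintptr_t) vector, 1024, sizeof(float));

    /* Each inserted task is a node of the task graph; StarPU infers
       dependencies from the declared data accesses (STARPU_RW here). */
    starpu_task_insert(&scal_cl,
                       STARPU_RW, handle,
                       STARPU_VALUE, &factor, sizeof(factor),
                       0);

    starpu_data_unregister(handle); /* implicitly waits for the task */
    starpu_shutdown();
    return 0;
}
</pre>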
@@ -103,56 +103,12 @@ October
and accelerator devices with other parallel runtime systems, ...
</p>
<p>
June
2019 <b>&raquo;&nbsp;</b><a href="http://gforge.inria.fr/frs/?group_id=1570"><b>The
release 1.3.2 of StarPU is now
available!</b></a> The 1.3 release brings, among other
functionalities, MPI master-slave support, a tool to replay
execution through SimGrid, an HDF5 implementation of the
out-of-core support, a new implementation of StarPU-MPI on top of
NewMadeleine, implicit support for asynchronous partition
planning, a resource management module to share processor cores
and accelerator devices with other parallel runtime systems, ...
</p>
<p>
May 2019 <b>&raquo;&nbsp;</b><a href="http://gforge.inria.fr/frs/?group_id=1570"><b>The
v1.1.8 release of StarPU is now available!</b></a> This release notably brings the concept of
scheduling contexts, which makes it possible to partition computation
resources. It is intended to be the last release of the
1.1 branch.
</p>
<p>
April
2019 <b>&raquo;&nbsp;</b><a href="http://gforge.inria.fr/frs/?group_id=1570"><b>The
release 1.3.1 of StarPU is now
available!</b></a> The 1.3 release brings, among other
functionalities, MPI master-slave support, a tool to replay
execution through SimGrid, an HDF5 implementation of the
out-of-core support, a new implementation of StarPU-MPI on top of
NewMadeleine, implicit support for asynchronous partition
planning, a resource management module to share processor cores
and accelerator devices with other parallel runtime systems, ...
</p>
<p>
March
2019 <b>&raquo;&nbsp;</b><a href="http://gforge.inria.fr/frs/?group_id=1570"><b>The
release 1.3.0 of StarPU is now
available!</b></a> The 1.3 release brings, among other
functionalities, MPI master-slave support, a tool to replay
execution through SimGrid, an HDF5 implementation of the
out-of-core support, a new implementation of StarPU-MPI on top of
NewMadeleine, implicit support for asynchronous partition
planning, a resource management module to share processor cores
and accelerator devices with other parallel runtime systems, ...
</p>
<p>
February 2019 <b>&raquo;&nbsp;</b><a href="http://gforge.inria.fr/frs/?group_id=1570"><b>The
1.2.8 release of StarPU is now available!</b></a>
The 1.2 release series notably brings out-of-core support, Intel MIC / Xeon
Phi support, OpenMP runtime support, and a new internal
communication system for MPI.
(Release 1.2.7 is broken and should not be used.)
</p>
</div>
<div class="section emphasizebot" style="text-align: right; font-style: italic;">
@@ -372,11 +328,9 @@ StarPU will <b>automatically evict</b> data from the main memory in advance, and
<h4>All in all</h4>
<p>
All that means that, with the help
of <a href="doc/html/cExtensions.html">StarPU's extensions to the C
language</a>, the following sequential source code of a tiled version of
the classical Cholesky factorization algorithm using BLAS is also valid
StarPU code, possibly running on all the CPUs and GPUs, and given a data
All that means that the following sequential source code of a tiled version of
the classical Cholesky factorization algorithm using BLAS is also
(almost) valid StarPU code, possibly running on all the CPUs and GPUs, and given a data
distribution over MPI nodes, it is even a distributed version!
</p>
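<p>
As an illustration, here is a hedged sketch of how that tiled Cholesky loop
can be expressed with starpu_task_insert. The codelet names (potrf_cl,
trsm_cl, syrk_cl, gemm_cl) and the registered tile handles A[i][j] are
assumptions for this sketch, not necessarily the exact code shown on this page.
</p>
<pre>
/* Sketch: tiled Cholesky (lower triangular) as a StarPU task graph.
   k, i, j, tiles, the codelets, and the tile handles A[i][j] are
   assumed to be declared and registered elsewhere. */
for (k = 0; k &lt; tiles; k++) {
    starpu_task_insert(&potrf_cl, STARPU_RW, A[k][k], 0);      /* POTRF */
    for (i = k + 1; i &lt; tiles; i++)
        starpu_task_insert(&trsm_cl, STARPU_R, A[k][k],
                           STARPU_RW, A[i][k], 0);             /* TRSM */
    for (i = k + 1; i &lt; tiles; i++) {
        starpu_task_insert(&syrk_cl, STARPU_R, A[i][k],
                           STARPU_RW, A[i][i], 0);             /* SYRK */
        for (j = k + 1; j &lt; i; j++)
            starpu_task_insert(&gemm_cl, STARPU_R, A[i][k],
                               STARPU_R, A[j][k],
                               STARPU_RW, A[i][j], 0);         /* GEMM */
    }
}
starpu_task_wait_for_all(); /* dependencies were inferred from data accesses */
</pre>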
@@ -397,7 +351,6 @@ for (k = 0; k < tiles; k++) {
<li>SMP/Multicore Processors (x86, PPC, ARM, ...; all Debian architectures have been tested)</li>
<li>NVIDIA GPUs (e.g. heterogeneous multi-GPU), with pipelined and concurrent kernel execution support (new in v1.2) and GPU-GPU direct transfers (new in v1.1)</li>
<li>OpenCL devices</li>
<li>Cell Processors (experimental)</li>
<li>Intel SCC (experimental, new in v1.2)</li>
<li>Intel MIC / Xeon Phi (new in v1.2)</li>
</ul>