Commit 8b9a6b1a authored by THIBAULT Samuel's avatar THIBAULT Samuel

comment on scale-in and scale-out

git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/starpu/website@15672 176f6dd6-97d6-42f4-bd05-d3db9ad07c7a
parent ca3d22cd
@@ -181,7 +181,10 @@ permits to <b>automatically let processing units execute the tasks they are the
Various strategies and variants are available: dmda (a data-aware MCT strategy,
similar to heft, but which starts executing tasks before the whole task graph is
submitted, thus allowing dynamic task submission), eager, locality-aware
work-stealing, ...
work-stealing, ... The per-task overhead is typically on the order of a
microsecond. Tasks should therefore be a few orders of magnitude larger,
e.g. 100 microseconds or 1 millisecond, to make this overhead
negligible.
</p>
<h4>Clusters</h4>
@@ -191,7 +194,11 @@ explicit network communications, which will then be <b>automatically combined an
overlapped</b> with the intra-node data transfers and computation. The application
can also just provide the whole task graph, a data distribution over MPI nodes, and StarPU
will automatically determine which MPI node should execute which task, and
<b>generate all required MPI communications</b> accordingly (new in v0.9).
<b>generate all required MPI communications</b> accordingly (new in v0.9). We
have obtained excellent scaling on a 144-node cluster with GPUs; we have not yet
had the opportunity to test on a larger cluster. We have however measured
that with naive task submission it should scale to about a thousand nodes, and
with pruning-tuned task submission to about a million nodes.
</p>
<h4>Out of core</h4>