%% Package config
\lstset{language=c++, frame=lines}
\RestyleAlgo{boxed}
\geometry{scale=0.8, nohead}
\hypersetup{ colorlinks = true, linkcolor = black, urlcolor = blue, citecolor = blue }
%% Remove introduction numbering
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
In this document we introduce the principles and the algorithms used
in our library to run in a distributed environment using MPI. The
algorithms in this document may not be up to date compared to those
used in the code. We advise checking the versions of this document
and of the code to make sure you have the latest available.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Building the tree in Parallel}
\section{Description}
The main motivation to create a distributed version of the FMM is to
run large simulations. These contain more particles than a single
computer can host, which requires using several computers. Moreover,
it is not reasonable to ask a master process to load an entire file
and to dispatch the data to the other processes: without knowing the
entire tree, it could only send the data to the slaves more or less
randomly. To overcome this situation, our solution can be viewed as
a two-step process. First, each node loads a part of the file and
thereby owns several particles. After this task, each node can
compute the Morton index of each particle it has loaded. The Morton
index of a particle depends on the system properties but also on the
tree height. If we want to choose the tree height and the number of
nodes at run time, then we cannot pre-process the file. The second
step is a parallel sort based on the Morton index across all nodes,
with a balancing operation at the end.
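For illustration, here is a minimal sketch of a 3D Morton index
computation, assuming a hypothetical \texttt{computeMortonIndex}
helper and a 64-bit \texttt{MortonIndex} type; the actual code of the
library may differ.
\begin{lstlisting}
typedef long long MortonIndex;

// Hypothetical sketch: interleave the bits of the integer
// coordinates of the leaf hosting a particle. At the leaf level of
// a tree of the given height, each coordinate uses treeHeight - 1
// bits, hence the dependency of the index on the tree height.
MortonIndex computeMortonIndex(int x, int y, int z, int treeHeight){
    MortonIndex index = 0;
    for(int idxBit = 0; idxBit < treeHeight - 1; ++idxBit){
        index |= MortonIndex((x >> idxBit) & 1) << (3 * idxBit + 2);
        index |= MortonIndex((y >> idxBit) & 1) << (3 * idxBit + 1);
        index |= MortonIndex((z >> idxBit) & 1) << (3 * idxBit);
    }
    return index;
}
\end{lstlisting}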
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Load a file in parallel}
We use the MPI I/O functions to split a file between all the MPI
processes. The prerequisite to make the splitting easier is to have
a binary file. Thereby, using a very simple formula, each node knows
which part of the file it needs to load.
\begin{equation}
sizePerProc \leftarrow \left( fileSize - headerSize \right) / nbProcs
\end{equation}
\begin{equation}
offset \leftarrow headerSize + sizePerProc \cdot \left( rank - 1 \right)
\end{equation}
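As a minimal sketch, the offset computation could look as follows,
assuming 0-based MPI ranks (so the $rank - 1$ of the formula above
becomes the rank itself) and hypothetical variable names.
\begin{lstlisting}
// Hypothetical sketch: compute the part of the file this process
// has to read.
MPI_Offset fileSize = 0;
MPI_File_get_size(file, &fileSize);
const MPI_Offset sizePerProc = (fileSize - headerSize) / nbProcs;
// With 0-based ranks, process 0 reads right after the header.
const MPI_Offset offset = headerSize + sizePerProc * rank;
\end{lstlisting}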
We do not use the view system to read the data, though it is used to
write. MPI\_File\_read\_at is called as shown in the following C++
code.
\begin{lstlisting}
// From FMpiFmaLoader: each process reads its own particles,
// 4 real values per particle, starting at its computed offset.
MPI_File_read_at(file, headDataOffSet + startPart * 4 * sizeof(FReal),
                 particles, int(bufsize), MPI_FLOAT, &status);
\end{lstlisting}
Our files are composed of a header followed by all the particles.
The header enables us to check several properties, such as the precision of the file.
Finally, a particle is represented by four decimal values: a position and a physical value.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Sorting the particles}
Once each node has a set of particles, we need to sort them. This
problem boils down to a simple parallel sort where the Morton indexes
are used to compare particles. We use two different approaches to
sort the data. In the next version of ScalFMM the less efficient
method should be removed.
\subsection{Using QuickSort}
A first approach is to use a well-known sorting algorithm. We chose
the quick sort algorithm because the distributed and the shared
memory approaches are mostly similar. Our implementation is based on
the algorithm described in \cite{itpc03}. The efficiency of this
algorithm depends heavily on the pivot choice. In fact, a common
misconception about the parallel quick sort is to think that each
process first sorts its particles using quick sort and then uses a
merge sort to share the results. Instead, the nodes choose a common
pivot and progress through one quick sort iteration together. From
that point, each process has an array with a left part where all
values are lower than the pivot and a right part where all values are
greater than or equal to the pivot. Then, the nodes exchange data and
some of them work on the lower part and the others on the upper part,
until there is one process per part. At this point, each process
performs a shared memory quick sort. To choose the pivot we tried to
use an average of all the data hosted by the nodes:
\newline
\begin{algorithm}[H]
\LinesNumbered
\SetAlgoLined
\KwResult{A Morton index as next iteration pivot}
\BlankLine
myFirstIndex $\leftarrow$ particles$[0]$.index\;
allFirstIndexes = MortonIndex$[nbprocs]$\;
allGather(myFirstIndex, allFirstIndexes)\;
pivot $\leftarrow$ Sum(allFirstIndexes(:) / nbprocs)\;
\BlankLine
\caption{Choosing the QS pivot}
\end{algorithm}
\newline
At the beginning we had a bug: we computed the average by summing all
the values first and dividing afterwards. But the Morton indexes may
be extremely large, so we have to divide each value before performing
the sum in order to avoid an overflow.
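As a minimal sketch, assuming the MPI C bindings and a 64-bit
\texttt{MortonIndex}, the corrected pivot computation could be
written as follows.
\begin{lstlisting}
// Hypothetical sketch: overflow-safe average of the first indexes.
MortonIndex myFirstIndex = particles[0].index;
std::vector<MortonIndex> allFirstIndexes(nbProcs);
MPI_Allgather(&myFirstIndex, 1, MPI_LONG_LONG,
              allFirstIndexes.data(), 1, MPI_LONG_LONG, comm);
MortonIndex pivot = 0;
for(int idxProc = 0; idxProc < nbProcs; ++idxProc){
    // Divide before summing: the indexes may be close to the
    // maximum of the type, so summing first could overflow.
    pivot += allFirstIndexes[idxProc] / nbProcs;
}
\end{lstlisting}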
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It is a simple reordering of the data, but the data has to stay
sorted. At the end of the algorithm our system is completely
balanced, with the same number of leaves on each process.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=15cm, height=15cm, keepaspectratio=true]{Balance.png}
\caption{Balancing Example}
\end{center}
\end{figure}
A process has to send data to the left if its current left limit is
greater than its objective limit. The same holds on the other side,
and we can reverse the calculation to know whether a process has to
receive data.
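As a minimal sketch, assuming a hypothetical \texttt{myLeftLeafIndex}
holding the global index of the first leaf of this process (obtained
for example with a prefix sum of the local leaf counts), the
direction of the exchanges could be decided as follows.
\begin{lstlisting}
// Hypothetical sketch: decide the direction of the exchanges.
const int leavesPerProc = totalNbLeaves / nbProcs;
const int objectiveLeftLimit = rank * leavesPerProc;
// Current left limit above the objective: send leaves to the left.
const bool sendToLeft = (myLeftLeafIndex > objectiveLeftLimit);
// Below the objective: receive leaves from the left neighbor.
const bool recvFromLeft = (myLeftLeafIndex < objectiveLeftLimit);
\end{lstlisting}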
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Simple operators: P2M, M2M, L2L}
We present the different FMM operators in two separate parts,
depending on their parallel complexity. In this first part, we
present the three simplest operators: P2M, M2M and L2L. Their
simplicity comes from the fact that we can predict which node hosts a
cell and how to organize the communication.

We first present how the different processes can know which cell or
leaf belongs to which process.
\section{Morton Index Intervals}
A Morton index interval is a simple structure holding two Morton
indexes, referencing the first and last leaf of a process. Each
process first computes its Morton index interval by scanning all its
leaves.

Once each process has computed its interval, a global communication
lets every process know the intervals of the others, and the result
is stored in an array of interval structures.
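As a minimal sketch, the structure and the global communication could
look as follows; the structure layout and the variable names are
hypothetical.
\begin{lstlisting}
// Hypothetical sketch: exchange the working intervals.
struct Interval {
    MortonIndex leftIndex;   // Morton index of my first leaf
    MortonIndex rightIndex;  // Morton index of my last leaf
};
Interval myInterval = { leafs.front().index, leafs.back().index };
std::vector<Interval> intervals(nbProcs);
// Every process receives the interval of every other process.
MPI_Allgather(&myInterval, sizeof(Interval), MPI_BYTE,
              intervals.data(), sizeof(Interval), MPI_BYTE, comm);
\end{lstlisting}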
\section{P2M}
The P2M remains unchanged from the sequential approach to the
distributed memory algorithm. In fact, in the sequential model we
compute a P2M between all the particles of a leaf and this leaf,
which is also a cell. Since a leaf and the particles it hosts belong
to only one node, applying the P2M operator does not require any
information from another node. From that point, using the shared
memory operator makes sense.
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{M2M}
During the upward pass, information moves from one level to the level
above. The problem in a distributed memory model is that one cell can
exist in several trees, i.e.\ on several nodes. Because the M2M
operator computes the relation between a cell and its children, the
nodes which have a cell in common need to share information.

Moreover, we have to decide which process will be responsible for the
computation if the cell is present on more than one node. We have
decided that the node with the smallest rank has the responsibility
to compute the M2M and to propagate the value for the future
operations. Although the other processes do not compute this cell,
they have to send the children they hold of this shared cell to the
responsible node.

We can establish some rules and some properties of the communication
during this operation. At each iteration a process never needs to
send more than 7 cells, and likewise it never needs to receive more
than 7 cells, since a cell has at most 8 children and the responsible
node already holds at least one of them. The shared cells are always
at the extremities of the working intervals, and one process cannot
be designated as responsible for more than one shared cell per level.
There are two cases:
\begin{itemize}
\item My first cell is shared: I need to send the children I have of
this cell to the process on my left.
\item My last cell is shared: I need to receive some children from
the process on my right.
\end{itemize}
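As a minimal sketch, both tests can be expressed with the interval
array of the previous chapter; the variable names are hypothetical.
\begin{lstlisting}
// Hypothetical sketch: detect the shared cells at level idxLevel.
// A cell index is the leaf Morton index shifted right by 3 bits for
// each level below it.
const int shift = 3 * (treeHeight - 1 - idxLevel);
const MortonIndex myFirstCell = intervals[rank].leftIndex >> shift;
const MortonIndex myLastCell  = intervals[rank].rightIndex >> shift;
// First cell shared: the previous process holds a part of it, so we
// send our children of this cell to the left.
const bool sendLeft = (rank != 0)
    && ((intervals[rank - 1].rightIndex >> shift) == myFirstCell);
// Last cell shared: the next process will send us its children.
const bool recvRight = (rank != nbProcs - 1)
    && ((intervals[rank + 1].leftIndex >> shift) == myLastCell);
\end{lstlisting}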
\begin{figure}[h!]
\begin{center}
\includegraphics[width=14cm, height=7cm, keepaspectratio=true]{ruleillu.jpg}
\caption{Potential Conflicts}
\end{center}
\end{figure}
\begin{algorithm}[H]
\RestyleAlgo{boxed}
\LinesNumbered
\SetAlgoLined
\KwData{none}
\KwResult{none}
\BlankLine
\For{idxLevel $\leftarrow$ $Height - 2$ \KwTo 1}{
    \ForAll{Cell c at level idxLevel}{
        M2M(c, c.child)\;
    }
}
\BlankLine
\caption{Traditional M2M}
\end{algorithm}
\begin{algorithm}[H]
\RestyleAlgo{boxed}
\LinesNumbered
\SetAlgoLined
\KwData{none}
\KwResult{none}
\BlankLine
\For{idxLevel $\leftarrow$ $Height - 2$ \KwTo 1}{
    \uIf{$cells[0]$ not in my working interval}{
        isend($cells[0].child$)\;
        hasSend $\leftarrow$ true\;
    }
    \uIf{$cells[end]$ in another working interval}{
        irecv(recvBuffer)\;
        hasRecv $\leftarrow$ true\;
    }
    \ForAll{Cell c at level idxLevel in working interval}{
        M2M(c, c.child)\;
    }
    \emph{Wait send and recv if needed}\;
    \uIf{hasRecv is true}{
        M2M($cells[end]$, recvBuffer)\;
    }
}
\BlankLine
\caption{Distributed M2M}
\end{algorithm}
In the octree, a cell or a leaf only exists if it has children or
particles in it. When a process receives some cells, it needs to know
their positions in the tree, because some of the cells may not have
been sent since they did not exist on the sender side.

The first thing to read from the received buffer is the header, a bit
vector of length 8 (in practice a char) indexing each cell sent.
Example:

\begin{tabular}{| c || c | c | c |}
\hline
Header & Data & ... & Data \\
\hline
00001011 & Data of cell 5 & Data of cell 7 & Data of cell 8 \\
\hline
\end{tabular}
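As a minimal sketch, the sender side could pack the existing children
behind such a header as follows; the buffer layout and the
\texttt{CellData} type are hypothetical.
\begin{lstlisting}
#include <cstring>
// Hypothetical sketch: pack up to 8 children behind a 1-byte header.
char header = 0;
char* cursor = buffer + 1;              // reserve the first byte
for(int idxChild = 0; idxChild < 8; ++idxChild){
    if(children[idxChild] != nullptr){
        header |= char(1 << idxChild);  // mark this child as present
        std::memcpy(cursor, children[idxChild], sizeof(CellData));
        cursor += sizeof(CellData);
    }
}
buffer[0] = header;
// The receiver tests each bit of buffer[0] to know which of the 8
// possible cells actually follow in the buffer.
\end{lstlisting}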
\clearpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{L2L}
The L2L operator is very similar to the M2M. It is just the
contrary: a result hosted by only one node needs to be shared with
all the other nodes that are responsible for at least one child of
this cell.

The L2L operator fills the children's local arrays from the parent's
local array, so there is no need to specify which cell is sent, since
it is always the parent cell. Consequently, there is no need for a
header.
\BlankLine
\begin{algorithm}[H]
\RestyleAlgo{boxed}
\LinesNumbered
\SetAlgoLined
\KwData{none}
\KwResult{none}
\BlankLine
\For{idxLevel $\leftarrow$ 2 \KwTo $Height - 2$ }{
    \uIf{$cells[0]$ not in my working interval}{
        irecv($cells[0]$)\;
        hasRecv $\leftarrow$ true\;
    }
    \uIf{$cells[end]$ in another working interval}{
        isend($cells[end]$)\;
        hasSend $\leftarrow$ true\;
    }
    \ForAll{Cell c at level idxLevel in working interval}{
        L2L(c, c.child)\;
    }
    \emph{Wait send and recv if needed}\;
    \uIf{hasRecv is true}{
        L2L($cells[0]$, $cells[0].child$)\;
    }
}
\BlankLine
\caption{Distributed L2L}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To compute the P2P, a leaf needs to know all its direct neighbors.
Even if the Morton indexing maximizes the locality, the neighbors of
a leaf can be on any node. Also, the tree used in our library is an
indirection tree: only the leaves that contain particles are created.
That is the reason why, when we know that a leaf needs another one on
a different node, this other node may not be aware of this relation
if this neighbor leaf does not exist in its own tree. On the
contrary, if this neighbor leaf exists, then the node will require
the first leaf to compute the P2P too.

In our current version, we first process each potential need in order
to know which communications will be required. Then the nodes do an
all gather to inform each other how many communications they are
going to send. Finally, they send and receive data in an asynchronous
way and overlap it with the P2P work they can already do.
\BlankLine
\begin{algorithm}[H]
\RestyleAlgo{boxed}
\LinesNumbered
\SetAlgoLined
\KwData{none}
\KwResult{none}
\BlankLine
\ForAll{Leaf lf}{
    neighborsIndexes $\leftarrow$ $lf.potentialNeighbors()$\;
    \ForAll{index in neighborsIndexes}{
        \uIf{index belongs to another proc}{
            isend(lf)\;
            \emph{Mark lf as a leaf that is linked to another proc}\;
        }
    }
}
\emph{All gather how many particles to send to whom}\;
\emph{Prepare the buffer to receive data}\;
\ForAll{Leaf lf}{
    \uIf{lf is not linked to another proc}{
        neighbors $\leftarrow$ $tree.getNeighbors(lf)$\;
        P2P(lf, neighbors)\;
    }
}
\While{We have not received/sent everything}{
    \emph{Wait some send and recv}\;
    \emph{Put received particles in a fake tree}\;
}
\ForAll{Leaf lf}{
    \uIf{lf is linked to another proc}{
        neighbors $\leftarrow$ $tree.getNeighbors(lf)$\;
        otherNeighbors $\leftarrow$ $fakeTree.getNeighbors(lf)$\;
        P2P(lf, neighbors + otherNeighbors)\;
    }
}
\BlankLine
\caption{Distributed P2P}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The algorithm can be viewed as several tasks:
\end{enumerate}
\BlankLine
\begin{algorithm}[H]
\RestyleAlgo{boxed}
\LinesNumbered
\SetAlgoLined
\KwData{none}
\KwResult{none}
\BlankLine
\ForAll{Level idxLevel from 2 to Height - 2}{
    \ForAll{Cell c at level idxLevel}{
        neighborsIndexes $\leftarrow$ $c.potentialDistantNeighbors()$\;
        \ForAll{index in neighborsIndexes}{
            \uIf{index belongs to another proc}{
                isend(c)\;
                \emph{Mark c as a cell that is linked to another proc}\;
            }
        }
    }
}
\emph{Normal M2L}\;
\emph{Wait send and recv if needed}\;
\ForAll{Cell c received}{
    $lightOctree.insert( c )$\;
}
\ForAll{Level idxLevel from 2 to Height - 1}{
    \ForAll{Cell c at level idxLevel that are marked}{
        neighborsIndexes $\leftarrow$ $c.potentialDistantNeighbors()$\;
        neighbors $\leftarrow$ lightOctree.get(neighborsIndexes)\;
        M2L( c, neighbors)\;
    }
}
\BlankLine
\caption{Distributed M2L}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{9}
\bibitem{itpc03}
Ananth Grama, George Karypis, Vipin Kumar, Anshul Gupta,
\emph{Introduction to Parallel Computing}.
Addison Wesley, Massachusetts,
2nd edition,
2003.
\bibitem{ptttplwaefmm11}
I. Kabadshow, H. Dachsel,
\emph{Passing the Three Trillion Particle Limit with an Error-Controlled Fast Multipole Method}.
2011.
\end{thebibliography}
\end{document}