Commit 1804156d authored by berenger-bramas

Correct and update the ParallelDetails.tex

git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/scalfmm/scalfmm/trunk@172 2616d619-271b-44dc-8df4-d4a8f33a7222
parent 3d859277
@@ -24,3 +24,4 @@ tmp/
*.toc
*.log
*.aux
*.brf
\documentclass[12pt,letterpaper,titlepage]{report}
\usepackage{algorithm2e}
\usepackage{listings}
\usepackage{geometry}
\usepackage{graphicx}
\usepackage[hypertexnames=false, pdftex]{hyperref}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% use pdflatex ParallelDetails.tex
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\author{Berenger Bramas}
\title{ScalFmm - Parallel Algorithms (Draft)}
\date{August, 2011}
%% Package config
\lstset{language=c++, frame=lines}
%% or \addtolength{\voffset}{-1.5cm} \addtolength{\textheight}{2cm}
%% \addtolength{\hoffset}{-1cm} \addtolength{\textwidth}{2cm}
\restylealgo{boxed}
\geometry{scale=0.8, nohead}
\hypersetup{ colorlinks = true, linkcolor = black, urlcolor = blue, citecolor = blue }
%% Remove introduction numbering
\setcounter{secnumdepth}{-1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle{}
\newpage
\tableofcontents
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
In this document we introduce the principles and the algorithms used in our library to run in a distributed environment using MPI.
The algorithms presented here may not be up to date compared to those used in the code.
We advise checking the versions of this document and of the code to make sure you have the latest available.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Building the tree in Parallel}
\section{Description}
The main motivation for creating a distributed version of the FMM is to run large simulations.
These simulations contain more particles than a single computer can host, which makes the use of several computers necessary.
For the same reason, it is not reasonable to ask a master process to load an entire file and dispatch the data to the other processes: without knowing the entire tree, it could only send the data to the slaves more or less randomly.
To overcome this situation, our solution can be viewed as a two-step process.
First, each node loads a part of the file and thus owns a subset of the particles.
After this task, each node can compute the Morton index of each particle it has loaded.
The Morton index of a particle depends on the system properties but also on the tree height.
Since we want to choose the tree height and the number of nodes at run time, we cannot pre-process the file.
The second step is a parallel sort based on the Morton indexes across all nodes, followed by a balancing operation.
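For illustration, a Morton index can be obtained by interleaving the bits of the integer coordinates of a leaf inside the grid of the leaf level. The following sketch is only an assumption of ours about how such an index can be computed (with a tree of height $h$ there are $2^{h-1}$ cells per dimension at the leaf level); it is not necessarily the exact ScalFmm implementation.
\begin{lstlisting}
// Hypothetical sketch: build a Morton index by interleaving the bits
// of the 3D cell coordinates (x, y, z) at the leaf level.
typedef unsigned long long MortonIndex;

MortonIndex mortonIndex(unsigned x, unsigned y, unsigned z, int treeHeight) {
    MortonIndex index = 0;
    const int bitsPerDim = treeHeight - 1; // 2^(h-1) cells per dimension
    for (int i = 0; i < bitsPerDim; ++i) {
        index |= MortonIndex((x >> i) & 1) << (3 * i);     // x bit
        index |= MortonIndex((y >> i) & 1) << (3 * i + 1); // y bit
        index |= MortonIndex((z >> i) & 1) << (3 * i + 2); // z bit
    }
    return index;
}
\end{lstlisting}
With this interleaving, two leaves that are close along the resulting space filling curve are usually close in space, which is what the sort described in the next sections relies on.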
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Load a file in parallel}
We use the MPI $I/O$ functions to split a file between all the MPI processes.
The prerequisite that makes the splitting easy is to have a binary file.
Thereby, using a very basic formula, each node knows which part of the file it needs to load.
\begin{equation}
\mathit{size\ per\ proc} \leftarrow \left( \mathit{file\ size} - \mathit{header\ size} \right) / \mathit{nbprocs}
\end{equation}
\begin{equation}
\mathit{offset} \leftarrow \mathit{header\ size} + \mathit{size\ per\ proc} \cdot \left( \mathit{rank} - 1 \right)
\end{equation}
\newline
The MPI $I/O$ functions use a view model to manage data access.
We first construct a view using the function MPI\_File\_set\_view and then read the data with MPI\_File\_read, as described in the following $C++$ code.
\begin{lstlisting}
// From FMpiFmaLoader
particles = new FReal[bufsize];
MPI_File_set_view(file, headDataOffSet + startPart * 4 * sizeof(FReal),
                  MPI_FLOAT, MPI_FLOAT, const_cast<char*>("native"), MPI_INFO_NULL);
MPI_File_read(file, particles, bufsize, MPI_FLOAT, &status);
\end{lstlisting}
Our files are composed of a header followed by all the particles.
The header enables us to check several properties, such as the precision of the file.
Finally, a particle is represented by four real values: a position and a physical value.
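To make the link between the formulas and the listing explicit, the quantities $startPart$ and $bufsize$ could, for instance, be derived as in the following sketch; the variable names and the choice of giving the remainder to the last process are our own assumptions, not necessarily what FMpiFmaLoader does.
\begin{lstlisting}
// Hypothetical sketch: derive the chunk of particles of this process.
// nbParticles is read from the header; each particle is stored as
// 4 FReal values (a position and a physical value).
const long long particlesPerProc = nbParticles / nbProcs;
const long long startPart = rank * particlesPerProc;
const long long endPart   = (rank != nbProcs - 1) ?
                            startPart + particlesPerProc : nbParticles;
const long long bufsize   = (endPart - startPart) * 4;
// The byte offset given to MPI_File_set_view is then
// headDataOffSet + startPart * 4 * sizeof(FReal).
\end{lstlisting}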
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Sorting the particles}
Once each node has a set of particles we need to sort them.
This problem boils down to a simple parallel sort where Morton indexes are used to compare particles.
\subsection{Using QuickSort}
A first approach is to use a well-known sorting algorithm.
We chose the quick sort because its distributed and shared memory versions are mostly similar.
Our implementation is based on the algorithm described in \cite{itpc03}.
The efficiency of this algorithm depends heavily on the pivot choice.
In fact, it would be wrong to think that the parallel quick sort consists of each process first sorting its own particles with a quick sort and then sharing the results with a merge sort.
Instead, the nodes choose a common pivot and progress through one quick sort iteration together.
From that point, every process has an array with a left part where all values are lower than the pivot and a right part where all values are greater than or equal to the pivot.
Then, the nodes exchange data and some of them work on the lower part while the others work on the upper part, until there is one process per part.
At this point, each process performs a shared memory quick sort.
To choose the pivot we tried to use an average of all the data hosted by the nodes:
\newline
\begin{algorithm}[H]
\linesnumbered
\SetLine
\KwResult{A Morton index as next iteration pivot}
\BlankLine
allFirstIndexes $\leftarrow$ AllGather(first local Morton index)\;
pivot $\leftarrow$ Sum(allFirstIndexes) / nbprocs\;
\BlankLine
\caption{Choosing the QS pivot}
\end{algorithm}
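In MPI terms, this pivot choice could look like the following sketch, where every process contributes the Morton index of its first particle and the pivot is the mean of these contributions; the function and variable names are ours and the sum is assumed not to overflow.
\begin{lstlisting}
#include <mpi.h>

// Hypothetical sketch of the pivot choice for one quick sort iteration.
typedef unsigned long long MortonIndex;

MortonIndex choosePivot(const MortonIndex myFirstIndex, MPI_Comm comm) {
    int nbProcs = 0;
    MPI_Comm_size(comm, &nbProcs);
    MortonIndex sum = 0; // assumed not to overflow
    MPI_Allreduce(&myFirstIndex, &sum, 1, MPI_UNSIGNED_LONG_LONG,
                  MPI_SUM, comm);
    return sum / MortonIndex(nbProcs);
}
\end{lstlisting}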
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Using an intermediate Octree}
The second approach uses an octree to sort the particles in each process instead of a sorting algorithm.
The time complexity is equivalent but it needs more memory since it is not done in place.
After inserting the particles in the tree, we can iterate at the leaf level and access the particles in an ordered way.
Then, the processes perform a minimum and a maximum reduction to know the real Morton interval of the system.
Knowing the system interval in terms of Morton indexes does not tell the nodes how the data are scattered within it.
Finally, the processes split the interval in a uniform manner and exchange data, with $P^{2}$ communications in the worst case.
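A minimal sketch of the interval reduction, assuming each process already knows the smallest and largest Morton index of its local leaves; the names are ours.
\begin{lstlisting}
#include <mpi.h>

// Hypothetical sketch: compute the real Morton interval of the system.
typedef unsigned long long MortonIndex;

void globalInterval(const MortonIndex localMin, const MortonIndex localMax,
                    MortonIndex* globalMin, MortonIndex* globalMax,
                    MPI_Comm comm) {
    MPI_Allreduce(&localMin, globalMin, 1, MPI_UNSIGNED_LONG_LONG,
                  MPI_MIN, comm);
    MPI_Allreduce(&localMax, globalMax, 1, MPI_UNSIGNED_LONG_LONG,
                  MPI_MAX, comm);
}
\end{lstlisting}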
\newline
\newline
In both approaches the data may not be balanced at the end.
In fact, the first method is pivot dependent and the second assumes that the data are uniformly distributed.
That is the reason why we need to balance the data among nodes.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Balancing the leaves}
After sorting, each process potentially holds several leaves.
If we have two processes $P_{i}$ and $P_{j}$ with $i < j$, the sort guarantees that all the leaves of node $i$ have smaller Morton indexes than those of node $j$.
But the leaves are randomly distributed among the nodes and we need to balance them.
Our idea is to use a two-pass algorithm that can be described as a sand settling:
\begin{enumerate}
\item We can see each node as a heap of sand.
This heap represents the leaves of the octree.
Some nodes have a lot of leaves and thus form a big heap.
On the contrary, some others form a small heap because they are composed of only a few leaves.
Starting from the extremities, each node can know how much sand there is on its left and on its right.
\item Because each node knows the total amount of sand in the system, which is the sum of the sand on each of its sides plus its own sand, it can compute its part of the balancing.
What should happen if we put a heavy plank above all our sand heaps?
Well, the system should settle until all heaps have the same size.
The same happens here: each node can know what to do to balance the system.
Each node communicates only with its two neighbors by sending or receiving entire leaves.
\end{enumerate}
At the end of the algorithm our system is completely balanced, with the same number of leaves on each process.
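A sketch of the bookkeeping behind this sand settling, assuming the amount of sand on the left is obtained with a prefix sum and the total amount with a reduction; the function and variable names are ours.
\begin{lstlisting}
#include <mpi.h>

// Hypothetical sketch: interval of global leaf positions this process
// should own once the system has settled.
void idealInterval(const long long myLeaves, const int rank,
                   const int nbProcs, MPI_Comm comm,
                   long long* idealStart, long long* idealEnd) {
    long long leavesOnMyLeft = 0;
    long long totalLeaves = 0;
    MPI_Exscan(&myLeaves, &leavesOnMyLeft, 1, MPI_LONG_LONG, MPI_SUM, comm);
    if (rank == 0) leavesOnMyLeft = 0; // MPI_Exscan is undefined on rank 0
    MPI_Allreduce(&myLeaves, &totalLeaves, 1, MPI_LONG_LONG, MPI_SUM, comm);
    *idealStart = (totalLeaves * rank) / nbProcs;
    *idealEnd   = (totalLeaves * (rank + 1)) / nbProcs;
    // Local leaves whose global position (starting at leavesOnMyLeft)
    // falls before idealStart travel to the left neighbor, those at or
    // after idealEnd travel to the right neighbor.
}
\end{lstlisting}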
\begin{figure}[h!]
\begin{center}
\includegraphics[width=14cm, height=17cm, keepaspectratio=true]{SandSettling.png}
\caption{Sand Settling Example}
\end{center}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Simple operators: P2M, M2M, L2L}
We present the different FMM operators in two separate parts, depending on their parallel complexity.
In this first part we present the three simplest operators: P2M, M2M and L2L.
Their simplicity comes from the fact that we can predict which node hosts a cell and therefore how to organize the communication.
\section{P2M}
The P2M remains unchanged from the sequential approach to the distributed memory algorithm.
In fact, in the sequential model we compute a P2M between all the particles of a leaf and this leaf, which is also a cell.
Moreover, a leaf and the particles it hosts belong to only one node, so applying the P2M operator does not require any information from another node.
From that point, using the shared memory operator makes sense.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{M2M}
During the upward pass, information moves from one level to the level above.
The problem in a distributed memory model is that one cell can exist in several trees, i.e. on several nodes.
Because the M2M operator computes the relation between a cell and its children, the nodes which have a cell in common need to share information.
Moreover, we have to decide which process will be responsible for the computation if the cell is present on more than one node.
We have decided that the node with the smallest rank has the responsibility to compute the M2M and to propagate the value for the future operations.
Although the other processes do not compute this cell, they have to send the children of this shared cell to the responsible node.
We can establish some rules and some properties of the communication during this operation.
In fact, at each level a process never needs to send more than $8-1=7$ cells and never needs to receive more than $8-1=7$ cells.
The shared cells are always at the extremities, and one process cannot be responsible for more than one shared cell per level.
\begin{algorithm}[H]
\restylealgo{boxed}
\linesnumbered
\SetLine
\KwData{none}
\KwResult{none}
\BlankLine
\caption{Traditional M2M}
\end{algorithm}
\begin{algorithm}[H]
\restylealgo{boxed}
\linesnumbered
\SetLine
\KwData{none}
\KwResult{none}
\BlankLine
\caption{Distributed M2M}
\end{algorithm}
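To illustrate these rules, recall that the parent of a cell of Morton index $m$ is obtained by dropping the three lowest bits of $m$ (a right shift by three), since those bits encode the position of the cell among the eight children. The following test, whose names are our own, is a sketch of how a process could decide whether it is responsible for its leftmost parent cell or must only send its children to the previous process.
\begin{lstlisting}
// Hypothetical sketch: decide who computes a parent cell shared between
// two consecutive processes during the M2M.
typedef unsigned long long MortonIndex;

bool iAmResponsible(const MortonIndex myFirstCellAtLevel,
                    const MortonIndex leftNeighborLastCellAtLevel,
                    const bool hasLeftNeighbor) {
    if (!hasLeftNeighbor) {
        return true; // the first process always keeps its first parent
    }
    // Same parent (same index once shifted): the smaller rank computes it,
    // so this process only sends its children of that parent to the left.
    return (myFirstCellAtLevel >> 3) != (leftNeighborLastCellAtLevel >> 3);
}
\end{lstlisting}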
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{L2L}
The L2L operator is very similar to the M2M.
It is just the other way around: a result hosted by only one node needs to be shared with every other node that is responsible for at least one child of this cell.
\BlankLine
\begin{algorithm}[H]
\restylealgo{boxed}
\linesnumbered
\SetLine
\KwData{none}
\KwResult{none}
\BlankLine
\caption{Distributed L2L}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Complex operators: P2P, M2L}
These two operators are more complex than the ones presented in the previous chapter.
In fact, it is very difficult to predict the communication between the nodes.
Each of these operators requires a pre-processing step to know what the potential communications are, and a gather so that every process can inform the others about its needs.
\section{P2P}
To compute the P2P, a leaf needs to know all its direct neighbors.
Even if the Morton indexing maximizes locality, the neighbors of a leaf can be hosted by any other node.
Also, the tree used in our library is an indirection tree, which means that a leaf is created only if it contains particles.
That is the reason why, when we know that one of our leaves needs a leaf hosted on a different node, this other node may not be aware of the relation if the neighbor leaf does not exist in its own tree.
On the contrary, if this neighbor leaf does exist, then the other node will require our leaf to compute its own P2P as well.
In our current version we first process each potential need to know which communications will be required.
Then the nodes perform an all gather to inform each other of how many messages they are going to send.
Finally, they send and receive data asynchronously and overlap this communication with the P2P computations they can already do.
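A hedged sketch of the announcement step, where each process publishes how many leaves it will send to every other process before posting the asynchronous messages; the counting of the leaves to send is omitted and the names are ours.
\begin{lstlisting}
#include <mpi.h>
#include <vector>

// Hypothetical sketch: toSend[p] holds the number of leaves this process
// will send to rank p (toSend.size() == nbProcs).
std::vector<int> announceCounts(const std::vector<int>& toSend,
                                MPI_Comm comm) {
    int nbProcs = 0;
    MPI_Comm_size(comm, &nbProcs);
    // After the gather, counts[p * nbProcs + q] is the number of leaves
    // rank p will send to rank q, so every process knows how many
    // messages to expect from each other process.
    std::vector<int> counts(static_cast<size_t>(nbProcs) * nbProcs, 0);
    MPI_Allgather(toSend.data(), nbProcs, MPI_INT,
                  counts.data(), nbProcs, MPI_INT, comm);
    return counts;
}
\end{lstlisting}
The non-blocking sends and receives can then be posted, for instance with MPI\_Isend and MPI\_Irecv, and overlapped with the P2P of the purely local leaves, as described above.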
\BlankLine
\begin{algorithm}[H]
\restylealgo{boxed}
\linesnumbered
\SetLine
\KwData{none}
\KwResult{none}
\BlankLine
\ForAll{Leaf lf}{
neighborsIndexes $\leftarrow$ $lf.potentialNeighbors()$\;
\ForAll{index in neighborsIndexes}{
P2P(lf, neighbors + otherNeighbors)\;
}
}
\BlankLine
\caption{Distributed P2P}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{M2L}
The M2L operator is relatively similar to the P2P.
Whereas the P2P is done at the leaf level, the M2L is done on several levels, from Height - 2 down to 2.
At each level, a node needs access to all the distant neighbors of the cells it owns, and those cells can be hosted by any other node.
Nevertheless, each node can compute a part of the M2L with the data it already has.
The algorithm can be viewed as several tasks (a small sketch of the locality test behind the first two tasks follows the list):
\begin{enumerate}
\item Compute to know what data has to be sent
\item Do all the computation we can without the data from others
\item Wait sendings and receptions
\item All gather to know what data has to be received
\item Do all the computation we can without the data from other nodes
\item Wait $send/receive$
\item Compute M2L with the data we received
\end{enumerate}
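As an illustration of the first two tasks, here is a minimal sketch of the test deciding whether a cell required by the M2L is hosted locally or must be requested from another process; we assume each process knows the Morton interval it owns at the current level, and the names are ours.
\begin{lstlisting}
// Hypothetical sketch: is a cell required by the M2L hosted locally?
typedef unsigned long long MortonIndex;

bool isLocalAtLevel(const MortonIndex neededCell,
                    const MortonIndex myIntervalMin,
                    const MortonIndex myIntervalMax) {
    // When the index falls outside the local interval, the cell (if it
    // exists at all, the tree being an indirection tree) has to be
    // received from the process that owns this part of the interval.
    return myIntervalMin <= neededCell && neededCell <= myIntervalMax;
}
\end{lstlisting}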
The complete algorithm is detailed in the following figure:
\BlankLine
\begin{algorithm}[H]
\restylealgo{boxed}
\linesnumbered
\SetLine
\KwData{none}
\KwResult{none}
\BlankLine
\caption{Distributed M2L}
\end{algorithm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{9}
\bibitem{itpc03}
Ananth Grama, George Karypis, Vipin Kumar, Anshul Gupta,
\emph{Introduction to Parallel Computing}.
Addison Wesley, Massachusetts,
2nd Edition,
2003.
\end{thebibliography}
\end{document}