Commit ef183787 authored by Glen Mével's avatar Glen Mével

swap spec and impl again: put spec before impl

parent 6cd958a1
@@ -10,8 +10,12 @@
% TODO: find where this implementation was first described
% ref: https://github.com/rigtorp/MPMCQueue
We now present an implementation of a bounded MRMW queue that satisfies the
specification devised in the previous section.
% TODO: explicit \sref?
Its code appears in \fref{fig:queue:impl}; it is written in our \mocaml{} fragment.
The queue uses a ring buffer of
fixed capacity~$\capacity \geq 1$. The buffer is represented by two arrays of
length~\capacity, \refstatuses and \refelements; in each slot (whose offset ranges from~$0$
inclusive to~$\capacity$ exclusive), the buffer thus stores a \emph{status} in an
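The representation just described can be sketched as a record type. This is an illustrative reconstruction, not the paper's exact code: the field names, the use of `Atomic.t` for statuses, and the `option` type for elements are all assumptions.

```ocaml
(* Hypothetical sketch of the buffer representation described above:
   two arrays of length [capacity], one holding a status per slot and
   one holding the elements themselves. Names are illustrative only. *)
type 'a queue = {
  capacity : int;                  (* fixed capacity C >= 1 *)
  statuses : int Atomic.t array;   (* one status per slot, offsets 0 .. C-1 *)
  elements : 'a option array;      (* one element per slot, [None] if empty *)
}

let make capacity =
  { capacity;
    statuses = Array.init capacity (fun _ -> Atomic.make 0);
    elements = Array.make capacity None }
```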
......
@@ -7,7 +7,7 @@ Let us assume for now that the memory model is sequentially consistent~\cite{lam
\citet{ohearn-07} devises \CSL, an extension of \SL that enables reasoning
about programs in which several threads can access the same piece of memory,
through the use of ``conditional critical regions''.
This logic spawned a variety of descendants; in this work, we use
Iris~\cite{iris}, a modular framework for building separation logics.
In a derivative of \CSL, we may retain the exact same specification as is presented
@@ -23,7 +23,11 @@ shared queue, following the approach of \CSL.
However, this coarse-grained concurrency has a run-time penalty,
and it also creates contention on the use of the queue.
%
These costs are often unnecessary, as many data~structures are designed
specifically to support concurrent accesses. In this paper, as stated, we wish
to prove the correctness of an MRMW~queue implementation, which should thus
ensure thread safety by itself.
%
Hence we can achieve finer-grain concurrency, where operating on a queue does
not require its exclusive ownership. In this context, it is important that the
ownership of the queue can be shared among several threads.
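As a point of comparison, the coarse-grained approach discussed above can be realized in OCaml by guarding a sequential queue with a lock; every operation then takes exclusive ownership of the whole structure. This is a minimal sketch using the standard library, not the paper's code:

```ocaml
(* A coarse-grained thread-safe queue: a sequential Stdlib.Queue guarded
   by a mutex. Correct, but every operation contends for the same lock,
   which is exactly the run-time cost the fine-grained design avoids. *)
module Coarse_queue = struct
  type 'a t = { lock : Mutex.t; q : 'a Queue.t }

  let make () = { lock = Mutex.create (); q = Queue.create () }

  let enqueue t x =
    Mutex.lock t.lock;
    Queue.push x t.q;
    Mutex.unlock t.lock

  let dequeue t =
    Mutex.lock t.lock;
    let r = Queue.pop t.q in   (* raises Queue.Empty if the queue is empty *)
    Mutex.unlock t.lock;
    r
end
```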
@@ -105,7 +109,7 @@ An invariant is an assertion which is agreed upon by all threads, and is owned b
In this invariant, the client would also express which properties of the public state their particular application needs.
Then, when one of the threads needs to access the shared resource, it can \emph{open} the invariant, perform the shared-state operations it needs, reestablish the invariant, and finally close it.
However, in order to be sound in the face of concurrency, the use of invariants in Iris must obey a strict constraint: an invariant can remain open for at most a single step of execution.
Unfortunately, $\enqueue$ and $\dequeue$ perform complex operations that take more than one step of execution.
The concept of logical atomicity~\cite[\S7]{jung-slides-2019,iris-15} aims to address that difficulty.
When using this concept, instead of using ordinary Hoare triples, we use \emph{logically atomic triples}.
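Schematically, a logically atomic triple can be written as follows; this is a sketch of the general shape (notation varies across presentations), not the paper's exact statement:

```latex
% Shape of a logically atomic triple: the angle brackets indicate that
% the call to e contains a linearization point at which it atomically
% consumes the precondition P and produces the postcondition Q, for a
% value of x chosen at that very instant (rather than at call time):
\[
  \forall x.\;
  \left\langle\, P\; x \,\right\rangle
  \ e\
  \left\langle\, v.\; Q\; x\; v \,\right\rangle
\]
```

This is what lets a client open an invariant around the linearization point only, rather than around the whole (multi-step) call.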
......
@@ -17,26 +17,26 @@ But, in \hlog{}, invariants are restricted to a special class of assertions, cal
As a result, the specification shown in~\fref{fig:queue:specsc} is useful in a weakly consistent context only if the representation predicate $\isQueueSC\elemList$ is objective.
Hence, the first addition to our specification to account for weakly consistent behaviors is the objectivity of $\isQueueSC\elemList$.
This corresponds to asserting that the use of atomic accesses in the concurrent queue library is sufficient to maintain its linearizability.
In a weakly consistent context, the concurrent queue library under study is not only linearizable: its operations also provide guarantees for memory accesses \emph{outside} of the queue.
Specifically, our library guarantees \emph{happens-before} relationships between some function calls:
\begin{enumerate}%
\item from an enqueuer to the dequeuer that obtains the corresponding item;
\item from an enqueuer to the following enqueuers;
\item from a dequeuer to the following dequeuers.
\end{enumerate}%
These happens-before relationships are important for practical uses of a queue.
For example, imagine that thread~$A$ enqueues a pointer to a complex data structure (say, a hash table).
Then, when thread~$B$ dequeues the corresponding item, one naturally expects that thread~$B$ is able to access the data structure as if it were its unique owner (provided, of course, that thread~$A$ does not access the data structure after enqueuing it).
In a weakly consistent memory model, this is only possible if there is a happens-before relationship between enqueuing and dequeuing: otherwise, nothing guarantees that thread~$B$ has seen all the changes made by thread~$A$ to the data structure.
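The scenario above can be made concrete in OCaml. As a stand-in for the paper's MRMW queue, this sketch uses a minimal blocking one-shot cell built from a mutex and a condition variable (here the lock provides the happens-before edge, whereas the queue under study achieves it with atomic accesses); the module and function names are hypothetical:

```ocaml
(* A minimal blocking one-shot cell, standing in for the queue. *)
module Cell = struct
  type 'a t = { m : Mutex.t; c : Condition.t; mutable v : 'a option }
  let make () = { m = Mutex.create (); c = Condition.create (); v = None }
  let put t x =
    Mutex.lock t.m; t.v <- Some x; Condition.signal t.c; Mutex.unlock t.m
  let take t =
    Mutex.lock t.m;
    while t.v = None do Condition.wait t.c t.m done;
    let x = Option.get t.v in
    Mutex.unlock t.m; x
end

(* Thread A fills a hash table, then transfers it; thread B receives it
   and reads it. B's read is safe only because [take] happens-after
   [put]: B has seen all of A's writes to the table. *)
let transfer_demo () =
  let q = Cell.make () in
  let a = Domain.spawn (fun () ->
    let h = Hashtbl.create 16 in
    Hashtbl.add h "key" 42;
    Cell.put q h)                  (* A no longer touches h after this *)
  in
  let b = Domain.spawn (fun () ->
    let h = Cell.take q in         (* happens-after the put *)
    Hashtbl.find h "key")
  in
  Domain.join a;
  Domain.join b
```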
In \hlog{}, happens-before relationships can be expressed as \emph{transfer of views}.
In this logic, views (denoted in this paper by calligraphic capital letters, such as $\mathcal{T,H,V,S}$) are abstract values describing a thread's knowledge of the current state of the global shared memory.
They are equipped with a lattice structure: larger views correspond to more recent knowledge.
In \hlog{}, when $\view$ is a view, the persistent assertion $\seen\view$ denotes that the current thread has the knowledge contained in $\view$.
In order to specify a happens-before relationship between two program points of two threads, we simply have to give the client the ability to transfer any such assertion between the two program points: this corresponds to saying that the destination program point has all the knowledge the source program point had on the shared memory.
As is usual in \hlog{}~\cite{mevel-jourdan-pottier-20} and seen later with the example of the pipeline, % FP pointer?
this can be used for transferring subjective resources from a sender to a receiver.
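As a sketch (with assumed notation, not the paper's exact rules), the happens-before guarantee from an enqueuer to the matching dequeuer can be read as a transfer of $\seen\view$ assertions:

```latex
% If the enqueuer of an item holds the assertion seen(V) at the time of
% the call, then the dequeuer that obtains this item may assume seen(V)
% upon return; schematically:
\[
  \underbrace{\seen\view \ast \cdots}_{\text{enqueuer, before the call}}
  \quad\leadsto\quad
  \underbrace{\seen\view \ast \cdots}_{\text{dequeuer, after the call}}
\]
```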
@@ -62,18 +62,18 @@ the representation predicate now takes more parameters:
\end{enumerate}%
The queue that we study, however, does not guarantee any happens-before relationship from
a dequeuer to an enqueuer.%
%
\footnote{%
This is not entirely true: the implementation shown in \sref{sec:queue:impl} does create
a happens-before relationship from the dequeuer of rank~$\idx$ to the enqueuer of
rank~$\idx+\capacity$ (hence also to all enqueuers of subsequent ranks).
We choose to not reveal this in the specification, since the specification hides the constant~$\capacity$.
}
%
Hence, by contrast with a sequential queue that would
be guarded by a lock, this concurrent queue is not linearizable, even though it
retains an interesting behavior that suffices for many use cases.
% FP it would be nice to give an example of a scenario where the behavior
% of the queue conforms to the spec and is nevertheless non-linearizable;
......
@@ -16,10 +16,11 @@ or both.
%% Despite not being fully general, such implementations have use cases:
%% work~stealing, for example, is commonly achieved using multiple-reader,
%% single-writer queues.
%
In this paper we study an implementation of a \emph{multiple-reader, multiple-writer}
(MRMW) queue, that is, a queue in which any number of threads may
enqueue and dequeue.
In addition, this queue is \emph{bounded}: it occupies no more than a
fixed amount of memory. A motivation for this trait is that items may be enqueued
more frequently than they are dequeued; in this situation, a bounded queue has no risk of
@@ -31,6 +32,6 @@ enqueuing either blocks or fails.
% once the queue is full; hence it is a flow-control
% mechanism.
\input{queue-spec}
\input{queue-impl}
\input{queue-proof}