Commit f82a0365 authored by Glen Mével

introduce SL and CSL, biblio

parent 0d560c2c
\subsubsection{Specification under sequential consistency}
\label{sec:queue:spec:sc}
We now consider the situation where several threads can access the queue concurrently.
Let us assume for now that the memory model is sequentially consistent~\cite{lamport-79}.
%
\citet{ohearn-07} devises \CSL, an extension of \SL that enables reasoning
about programs where several threads can access the same piece of memory,
through the use of ``conditional critical regions''.
This logic spawned a variety of extensions; in this work, we use
Iris~\cite{iris}, a modular framework for building separation logics.
% TODO: say a few words about Iris here? motivation for using it?
In a derivative of \CSL, we may retain the exact same specification as is presented
in~\fref{fig:queue:specseq}, recalling that $\isQueueSEQ\elemList$ is an
\emph{exclusive} assertion:
% TODO: check that this has indeed been stated earlier, otherwise it is not a reminder…
%
a thread that holds this assertion can safely assume that it is the only one allowed
to operate on the queue.
%
The assertion would be transferable between threads via some means of
synchronization: for instance, we may use a lock to guard all operations on the
shared queue, following the approach of \CSL.
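For instance (a sketch only, not the implementation studied in this paper, assuming a sequential queue module \texttt{Queue} and OCaml's standard \texttt{Mutex}), the coarse-grained approach amounts to guarding every operation with a single lock:
\begin{verbatim}
(* Coarse-grained wrapper: a sequential queue protected by one lock.
   At most one thread operates on the queue at a time; the exclusive
   assertion is transferred between threads through the lock. *)
type 'a t = { lock : Mutex.t; q : 'a Queue.t }

let make () = { lock = Mutex.create (); q = Queue.create () }

let enqueue t v =
  Mutex.lock t.lock;
  Queue.push v t.q;
  Mutex.unlock t.lock

let rec dequeue t =
  Mutex.lock t.lock;
  if Queue.is_empty t.q then
    (Mutex.unlock t.lock; dequeue t)   (* empty: release the lock and retry *)
  else begin
    let v = Queue.pop t.q in
    Mutex.unlock t.lock;
    v
  end
\end{verbatim}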
%
However, this coarse-grained concurrency has a run-time penalty,
and it also creates contention on the use of the queue.
%
These costs are unnecessary because the implementation that we wish to prove correct is already thread-safe.
Hence we can achieve finer-grain concurrency, where operating on a queue does
not require its exclusive ownership. In this context, it is important that the
ownership of the queue can be shared among several threads.
%% the queue data~structure
%% will be described by an \emph{invariant}. It is a persistent assertion, that can
......
a queue: the same specification could also describe a stack or a bag.
Another option is to keep exposing a specification akin to that of~\fref{fig:queue:specseq}, which relies on an exclusive representation predicate allowing for the most precise description of the public state, and to let clients share it themselves by applying a technique commonly used in Iris-like logics%
%to keep a precise description of the state that is stable in the face of interference by other threads
: one would typically place the exclusive ownership of the queue in an \emph{invariant}.
An invariant is an assertion which is agreed upon by all threads, and is owned by everyone; it remains true forever.
In this invariant, the client would also express which properties of the public state their particular application needs.
Then, when one of the threads needs to access the shared resource, it can \emph{open} the invariant, perform the shared state operations it needs, reestablish the invariant, and finally close it.
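For instance, a client could place, inside an Iris invariant, an assertion of the following shape, where $P$ is a hypothetical application-specific predicate over the sequence of items (a sketch only, ignoring namespaces):
\[
  \exists\, \nbelems, \elemList*.\;\; \isQueueSEQ\elemList \ast P\,(\elemList*)
\]
Opening this invariant yields the exclusive ownership of the queue together with the knowledge of~$P$; closing it requires giving both back.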
However, in order to be sound in the face of concurrency, the use of invariants in Iris must obey a strict constraint: they can only stay open for an \emph{atomic} duration (i.e.~they must be closed either immediately or after the execution of a single atomic instruction).
% TODO: define (informally) what is an "atomic instruction" ?
Unfortunately, $\enqueue$ and $\dequeue$ perform complex operations and are not atomic.
The idea of logical atomicity~\cite[\S7]{jung-slides-2019,iris-15} aims at addressing that difficulty.
......
The core difference is that they allow opening invariants around the triple: in
The definition of logically atomic triples is further discussed in
% TODO: reference the section
and given in full detail in previous work~\cite[\S7]{jung-slides-2019,iris-15}.
In order to get a good intuition, let us consider an approximate definition of this concept: a logically atomic triple $\lahoare{P}{e}{Q}$ states, roughly, that the expression $e$ contains a linearization point which has $P$ as a precondition and $Q$ as a postcondition.
Since this linearization point is atomic, invariants can be opened around it, just as around an atomic instruction.
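Schematically, and ignoring Iris details such as masks and the later modality, an invariant whose content is~$I$ can be opened around a logically atomic triple as follows (an informal sketch, not the exact rule):
\begin{mathpar}
  \inferrule
    {\lahoare{P \ast I}{e}{Q \ast I}}
    {\lahoare{P}{e}{Q}}
\end{mathpar}
where the conclusion is derived under the assumption that the invariant holding~$I$ has been established.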
%
\begin{mathpar}
......
\subsubsection{Specification in a sequential setting}
\label{sec:queue:spec:seq}
Let us start by assuming a sequential setting. We can then use standard
\SL~\cite{reynolds-02} to reason about programs and the resources they manipulate.
%
In \SL, a queue~$\queue$ holding $\nbelems$ items \(\elemList*\),
where the left extremity of the list is the tail and the right extremity is the head,
can be represented with an assertion:
%
\[
\isQueueSEQ \elemList
\]
As is usual in \SL, this \emph{representation predicate} asserts the unique
ownership of the entire data~structure. When holding it, we can safely
manipulate the queue without risk of invalidating other assertions about
resources that may alias parts of our queue. In particular, the representation
predicate cannot be duplicated; we say that it is \emph{exclusive}.
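In particular, two copies of this assertion for the same queue can never hold at the same time: informally, and writing $\mathit{False}$ for the unsatisfiable assertion, $\isQueueSEQ\elemList \ast \isQueueSEQ\elemList$ entails $\mathit{False}$.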
The operations of the queue admit a simple sequential specification which is
presented in~\fref{fig:queue:specseq}. We use the standard Hoare
triple notation, with a partial correctness meaning.
% TODO: ref for Hoare triples?
The asterisk~$(\ast)$ is the separating conjunction.
% GLEN: in fact we do not really need the separating conjunction at that point.
\begin{itemize}
\item
The function~$\make$ has no prerequisite and gives us the ownership of a new
empty queue.
\item
If we own a queue~$\queue$, then we can $\enqueue$ some item~$\elem$ into~$\queue$;
if this operation ever returns, then it must return the unit value~$\Unit$
and give us back the ownership of~$\queue$, where $\elem$ has been appended at the head.
\item
Conversely, if we own a queue~$\queue$, then we can $\dequeue$ from~$\queue$;
if this operation ever returns, then it must return the first item~$\elem_0$
found at the tail of~$\queue$, and it gives us back the ownership of~$\queue$,
where that first item has been removed.
\end{itemize}
This specification implies that $\dequeue$ cannot possibly return when the queue is empty
($\nbelems = 0$); in this case, it must loop forever.
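As a small usage example (a sketch, assuming an implementation exposing \texttt{make}, \texttt{enqueue}, and \texttt{dequeue} as described above), a sequential client can rely on the FIFO behaviour implied by these triples:
\begin{verbatim}
(* Items are dequeued in the order in which they were enqueued. *)
let () =
  let q = make () in
  enqueue q 1;
  enqueue q 2;
  assert (dequeue q = 1);  (* the item at the tail is returned first *)
  assert (dequeue q = 2)
  (* a further [dequeue q] on the now-empty queue would loop forever *)
\end{verbatim}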
......
% Specification of queues in CSL.
\subsection{Specification of Queues in a Concurrent Separation Logic}
\label{sec:queue:spec}
\input{queue-spec-seq}
......