Part 4 - Lecture 1

The deadline is **Friday, February 16, 2018**.
### Dependently-typed Functional Programming
* [Effectful functional programming](slides/pedagand-01.pdf) ([Source](agda/01-effectful/Monad.lagda.rst)).
* Dependent functional programming.
* Total functional programming.
* Generic functional programming.
{-# OPTIONS --allow-unsolved-metas #-}
module 01-effectful.Monad where
Effectful functional programming
================================
Didier Rémy's lecture:
- *extend* programming language with effects
(mutable references, exceptions, etc.)
- *operational* semantics: mimic the behavior of a machine
What if:
- we want a denotational model/semantics? (compositionality!)
- we want a *pure* language? (eg. Haskell)
- we cannot afford an impure language? (eg. a proof assistant)
Today, we shall:
- give a denotational model for a monomorphic memory cell
- present a general framework for modelling *algebraic* effects
- meet a few monads (reader, writer, non-determinism, random, CPS, etc.)
`Notions of computations determine monads`_:
1. syntax of effectful operations
2. equational theory of effectful operations
3. denotational semantics = quotiented syntax
4. interpreter / monad!
Historical background
---------------------
`Notions of computations and monads`_, Eugenio Moggi (1991):
- "generic" syntax for writing effectful programs
- denotational model based on a categorical notion
- key idea: interpretation in the `Kleisli category`_ of a `monad`_
- compositionality: that's what a `category`_ is for!
`The essence of functional programming`_, Phil Wadler (1992):
- bring monads to "mainstream" programming
- key idea : program *in* the denotational model
- Scottish cottage industry: the state monad, exception monad, list monad, CPS monad, etc.
Longstanding open problem:
- combining effects
- example: stateful *and* exceptionful program
- what is the "composition" of the state and exception monad?
- how to handle their interaction?
- example: upon an exception, do we roll-back the state, or not?
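To make the interaction question concrete, here is a minimal Python sketch (purely illustrative, not part of the notes): the two candidate "compositions" correspond to two different result types, and they disagree on roll-back.

```python
# Hypothetical models of a program that writes to the state and then
# raises an exception; the string "error" stands for the raised exception.

def prog_state_survives(s):
    # Exceptions inside state, S -> (S, V + E): the new state is
    # returned alongside the failure, so the write is NOT rolled back.
    s = s + 1                    # set (s + 1)
    return (s, "error")          # raise

def prog_state_rolled_back(s):
    # State inside exceptions, S -> ((S, V) + E): failure discards the
    # state component, so the write IS rolled back.
    s = s + 1                    # set (s + 1)
    return "error"               # raise

assert prog_state_survives(0) == (1, "error")
assert prog_state_rolled_back(0) == "error"
```

The first shape arises from layering exceptions over state, the second from layering state over exceptions; neither is canonically "the" composition.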
`Notions of computations determine monads`_, Plotkin & Power (2002):
- describe syntax through an algebraic signature
- specify semantics through an equational theory
- obtain (some) monads from a more primitive presentation: a `Lawvere theory`_
- key idea: syntax can be combined (trivially), leave the user pick an appropriate semantics
`Segal condition meets computational effects`_, Melliès (2010):
- abstract non-sense does not run programs
- concretely, how do we get the state monad from its equations?
- key idea: use good ol' rewriting theory
- good idea: sitting next to Tarmo Uustalu at a conference
- algebraic effects are `all <>`_ `the <>`_ `rage <>`_
- `Even OCaml has some <>`_
- Open problems (probably): comparing expressive power & modular compilation
module Monad where
open import Level hiding (suc)
open import Data.Unit hiding (setoid ; _≟_)
open import Data.Nat renaming (_*_ to _*ℕ_)
open import Data.Nat.DivMod
open import Data.Fin hiding (_+_ ; raise ; _-_)
open import Data.Product
open import Function
open import Relation.Nullary
open import Relation.Binary
open import Relation.Binary.PropositionalEquality
renaming (trans to trans≡ ; sym to sym≡ ; cong to cong≡ ; cong₂ to cong₂≡)
hiding (setoid ; isEquivalence)
Stateful operations
-------------------
Let ``S`` be a Set. In this section, we are interested in programs
manipulating a single reference cell of monomorphic type ``S``.
module StateMonad where
S : Set
S = ℕ
Such stateful programs are built from two state-manipulating operations:
* ``get``, which returns the current state
* ``set``, which updates the current state
We formalize this intuition with the following signature::
data ΣState (X : Set) : Set where
`get : ⊤ × (S → X) → ΣState X
`set : S × (⊤ → X) → ΣState X
**Exercise (difficulty: 1)** Note that this type defines an endofunctor on ``Set``::
ΣState-map : ∀{V W} → (V → W) → ΣState V → ΣState W
ΣState-map f s = {!!}
ΣState-map-id : ∀{V} (x : ΣState V) →
ΣState-map (λ xs → xs) x ≡ x
ΣState-map-id = {!!}
ΣState-map-∘ : ∀{U V W} (f : U → V)(g : V → W)(x : ΣState U) →
ΣState-map (g ∘ f) x ≡ ΣState-map g (ΣState-map f x)
ΣState-map-∘ = {!!}
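For readers less fluent in Agda, the same functorial action can be sketched in Python (a hypothetical encoding; the names are ours, not the notes'): the signature becomes tagged nodes, and the map acts on the "continuation" positions.

```python
# Hypothetical Python transliteration of the signature ΣState and its
# functorial action (ΣState-map).

def sig_get(k):     return ("get", k)        # ⊤ × (S → X), the ⊤ dropped
def sig_set(s, k):  return ("set", s, k)     # S × (⊤ → X)

def sig_map(f, node):
    # apply f under the binders, leaving the operation untouched
    if node[0] == "get":
        _, k = node
        return ("get", lambda s: f(k(s)))
    _, s, k = node
    return ("set", s, lambda u: f(k(u)))

# spot-checks of the action on each constructor:
assert sig_map(lambda x: x + 1, sig_set(3, lambda u: 10))[2](()) == 11
assert sig_map(lambda x: x + 1, sig_get(lambda s: s))[1](5) == 6
```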
**Remark:** an equivalent (but more modular) way to obtain the same
signature is through the following constructs::
data ΣGet (X : Set) : Set where
`get : ⊤ × (S → X) → ΣGet X
data ΣSet (X : Set) : Set where
`set : S × (⊤ → X) → ΣSet X
data _⊕_ (F G : Set → Set)(X : Set) : Set where
inl : F X → (F ⊕ G) X
inr : G X → (F ⊕ G) X
which gives ``ΣState ≡ ΣGet ⊕ ΣSet``.
It is too early to follow that path but we shall bear in mind this
more elementary decomposition.
**Exercise (difficulty: 5)** What is the minimal language for describing
such signatures? We shall make sure that these signatures are always
functorial, for reasons that will become evident in the
following. Generalize the constructions presented in these lecture
notes using that language.
Free term algebra for State
---------------------------
From a signature, we can build a *syntax* for writing stateful
programs: we just need to combine 'get's, 'set's and pure computations
('return'). The resulting syntactic object is easily described by an
inductive type::
data StateF (V : Set) : Set where
return : V → StateF V
op : ΣState (StateF V) → StateF V
In this (very small) language, we have two smart constructors, ``get``
and ``set``, whose definition can be `automatically derived from the
signature <>`_::
get : ⊤ → StateF S
get tt = op (`get (tt , λ s → return s))
**Exercise (difficulty: 1)** Implement `set`::
set : S → StateF ⊤
set s = {!!}
Note that the type of these operations is exactly what we expect in,
say, OCaml modulo the presence of ``StateF``. It is useful to think of
``StateF`` as a modality on the arrow type, documenting what effects
the function may perform (aside from computing).
**Exercise (difficulty: 3)** thinking of ``V`` as a set of variables,
``StateF V`` denotes stateful computations with variables in ``V``. By
exploiting the functoriality of ``ΣState``, we can implement a form of
*composition* (some may say *sequencing*!) of stateful
programs. Formally, we have that ``StateF`` is a monad (the `free monad`_ over ``ΣState``)::
_>>=_ : ∀{V W} → StateF V → (V → StateF W) → StateF W
sv >>= mf = {!!}
If one thinks of ``V`` and ``W`` as sets of variables, then ``>>=``
(pronounced **bind**) can be thought as implementing a simultaneous
substitution. One can also think of these objects as trees (*ie.*
syntax trees) terminated by pure values of type ``V``, to which one
grafts trees terminated by pure values of type ``W``. Both intuitions
are useful.
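The grafting intuition can be sketched in Python (hypothetical encoding, ours): terms are trees, and ``bind`` walks down to every ``return`` leaf and grafts the new tree there.

```python
# Hypothetical Python rendition of StateF and its substitution.

def ret(v):      return ("return", v)
def get_(k):     return ("op-get", k)        # k : S -> term
def set_(s, k):  return ("op-set", s, k)     # k : () -> term

def bind(m, f):
    tag = m[0]
    if tag == "return":
        return f(m[1])                        # graft at the leaf
    if tag == "op-get":
        return ("op-get", lambda s: bind(m[1](s), f))
    return ("op-set", m[1], lambda u: bind(m[2](u), f))

# substituting for the unique leaf of `return 1`:
assert bind(ret(1), lambda x: ret(x + 2)) == ("return", 3)
# grafting goes under the binder of a `get` node:
assert bind(get_(lambda s: ret(s)), lambda v: ret(v + 1))[1](4) == ("return", 5)
```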
**Exercise (difficulty: 3)** Rewrite ``>>=`` in such a way that Agda is
able to check that it is indeed terminating. Hint: use a pair of
mutually recursive functions.
**Remark** there is nothing special about ``StateF``: given any
(well-behaved) endofunctor ``F : Set → Set``, we can build another
functor ``Free F : Set → Set`` which happens to be a monad: this is
the `free monad`_ construction which provides, for free, the
substitution ``>>=``. Free monads seem to provide an infinite source
of blog posts and Haskell packages.
**Remark** from a categorical perspective, it is a bit improper to call
``StateF`` the "free monad": as we shall see, category theorists
expect some form of quotienting over the terms we have built. Here,
we just have a lump of syntax. Rather than "free monad", we should
favor the notion of "free term algebra".
At this stage, we can write (but not execute!) stateful programs, such as::
test0 : StateF S
test0 = get tt >>= λ s →
set s >>= λ _ →
get tt >>= λ s' →
return s'
test1 : StateF S
test1 = get tt >>= λ s' →
return s'
test2 : StateF S
test2 = get tt >>= λ s →
set s >>= λ _ →
return s
random : StateF ℕ
random = get tt >>= λ seed →
let n = toℕ ((seed *ℕ 25173 + 1725) mod 65536) in
set n >>= λ _ →
return n
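As an aside, the arithmetic inside ``random`` is a textbook linear congruential generator; one step of it, transcribed to Python:

```python
def lcg_step(seed):
    # one step of the generator used in `random` above
    return (seed * 25173 + 1725) % 65536

assert lcg_step(0) == 1725
assert all(0 <= lcg_step(s) < 65536 for s in range(100))
```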
Monad laws
----------
We have equipped the datatype ``StateF`` with quite a bit of
*structure*. Before delving further into the specifics of stateful
computations, we are going to prove three general results, the *monad
laws*, which we expect to hold for any such structure, irrespectively
of its particular semantics.
The monadic laws specify the interaction between ``return`` -- which
brings pure values into stateful programs -- and ``_>>=_`` -- which
applies stateful functions.
module Exercise-bind-left-unit where
**Exercise (difficulty: 1)** the first law states that applying a
stateful program to a pure value amounts to performing a standard
function application or, put otherwise, ``return`` is a left unit for ``_>>=_``::
bind-left-unit : ∀ {X Y} → (x : X)(k : X -> StateF Y) →
(return x >>= k) ≡ k x
bind-left-unit x k = {!!}
module Exercise-bind-right-unit where
**Exercise (difficulty: 4)** the second law states that returning a
stateful value amounts to giving the stateful computation itself or,
put otherwise, ``return`` is a right unit for ``_>>=_``::
bind-right-unit : ∀ {X} → (mx : StateF X) →
mx >>= return ≡ mx
bind-right-unit = {!!}
where postulate ext : Extensionality
This exercise is artificially difficult because of the need to
convince Agda's termination checker. One should feel free to convince
oneself of the termination of the straightforward definition instead
of fighting the termination checker. We will also need to postulate
functional extensionality.
module Exercise-bind-compose where
**Exercise (difficulty: 2)** finally, the third law states that we
can always parenthesize ``_>>=_`` from left to right or, put
otherwise, ``_>>=_`` is associative::
bind-compose : ∀ {X Y Z} → (mx : StateF X)(f : X -> StateF Y)(g : Y -> StateF Z) →
((mx >>= f) >>= g) ≡ (mx >>= λ x → (f x >>= g))
bind-compose = {!!}
where postulate ext : Extensionality
There is a familiar object that offers a similar interface: the (pure)
function, for which ``_>>=_`` amounts to composition and ``return`` to
the identity. Monads can be understood as offering "enhanced"
functions, presenting a suitable notion of composition and identity
*as well as* effectful operations. For the programmer, this means that
we have ``let _ = _ in _`` for pure functions and ``_>>=_`` for
effectful functions, both subject to (morally) the same laws of
function composition.
Equational theory of State
--------------------------
Intuitively, ``test0``, ``test1`` and ``test2`` denote the same
program. This section aims at stating this formally.
To do so, we equip our syntax with an equational theory. That is, we
need to specify which kind of identities should hold on stateful
programs. Or, put otherwise and following an operational approach, we
relationally specify the reduction behavior of ``StateF``, seen as an
embedded language. We want::
data _↝_ {V : Set} : StateF V → StateF V → Set where
get-get : ∀{k : S → S → StateF V} →
(get tt >>= (λ s → get tt >>= λ s' → k s s' )) ↝ (get tt >>= λ s → k s s )
set-set : ∀{k s₁ s₂} →
(set s₁ >>= (λ _ → set s₂ >>= λ _ → k)) ↝ (set s₂ >>= λ _ → k)
get-set : ∀{k} →
(get tt >>= λ s → set s >>= λ _ → k) ↝ k
set-get : ∀{k s} →
(set s >>= (λ _ → get tt >>= k)) ↝ (set s >>= λ _ → k s)
In English, this amounts to the following rules:
* rule ``get-get``: getting the current state twice is equivalent to getting it
  only once
* rule ``set-set``: setting the state twice is equivalent to performing only the
  last ``set``
* rule ``get-set``: getting the current state and setting it back in is equivalent to
doing nothing
* rule ``set-get``: setting the state then getting its value is equivalent to setting
the state and directly moving on with that value
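Before formalizing anything, we can sanity-check the four equations against the intended semantics ``S → S × V``, sketched in Python (illustrative only; it anticipates the ``eval`` function developed later in these notes):

```python
# States are integers; a computation is a function s -> (s', v).

def ret(v):     return lambda s: (s, v)
def bind(m, f):
    def run(s):
        s1, v = m(s)
        return f(v)(s1)
    return run
get = lambda s: (s, s)
def put(x):     return lambda s: (x, ())

k2 = lambda a, b: ret(a + b)     # arbitrary two-hole continuation
k  = ret("done")                 # arbitrary continuation
for s in range(5):
    # get-get
    assert bind(get, lambda a: bind(get, lambda b: k2(a, b)))(s) \
        == bind(get, lambda a: k2(a, a))(s)
    # set-set
    assert bind(put(1), lambda _: bind(put(2), lambda _: k))(s) \
        == bind(put(2), lambda _: k)(s)
    # get-set
    assert bind(get, lambda a: bind(put(a), lambda _: k))(s) == k(s)
    # set-get
    assert bind(put(3), lambda _: bind(get, ret))(s) \
        == bind(put(3), lambda _: ret(3))(s)
```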
**Remark** where do these equations come from? Quite frankly, I
took them from `Matija Pretnar's PhD thesis`_. Paul-André Melliès
would start from a minimal set of equations and run `Knuth-Bendix
completion algorithm`_ to find a confluent equational theory/term
rewriting system.
**Remark** coming from a mathematical background, one may understand
this formalism as a generalization of algebraic structures such as
monoids, groups, etc.:
- we start with a signature of operations, such as "there is
a nullary symbol ``1`` and a binary symbol ``.``".
- then, we give a set of axioms equating open terms, such as
``(a . b) . c = a . (b . c)``, ``1 . a = a``, and ``a . 1 = a``.
From local equations, we easily build the congruence closure: it
includes ``↝``, is transitive, reflexive, and symmetric, and is lifted
from subterms to whole terms::
data _∼_ {V : Set} : StateF V → StateF V → Set₁ where
inc : ∀{p q} → p ↝ q → p ∼ q
trans : ∀{p q r} → p ∼ q → q ∼ r → p ∼ r
refl : ∀{p} → p ∼ p
sym : ∀{p q} → p ∼ q → q ∼ p
cong : ∀{W}(tm : StateF W){ps qs : W → StateF V} →
(∀ w → ps w ∼ qs w) →
(tm >>= ps) ∼ (tm >>= qs)
To reason up to this equivalence relation, we can state that elements
of a set ``V`` should be considered up to ``∼``: this defines a
so-called (and dreaded) `setoid`_::
setoid : Set → Setoid _ _
setoid V = record
  { Carrier = StateF V
  ; _≈_ = _∼_
  ; isEquivalence = isEquivalence
  }
  where isEquivalence : ∀ {V : Set} → IsEquivalence (_∼_ {V = V})
        isEquivalence = record
          { refl = refl
          ; sym = sym
          ; trans = trans
          }
module Exercise-equiv-prog12 where
**Exercise (difficulty: 1 or 5)** we can now formally reason about the
equivalence of programs. This is not only of formal interest, this is
also at the heart of compiler optimizations, code refactoring, etc.::
prog1 : StateF ℕ
prog1 =
get tt >>= λ x →
set (1 + x) >>= λ _ →
get tt >>= λ y →
set (2 + x) >>= λ _ →
get tt >>= λ z →
set (3 + y) >>= λ _ →
return y
prog2 : StateF ℕ
prog2 =
get tt >>= λ x →
set (4 + x) >>= λ _ →
return (1 + x)
prog-equiv : prog1 ∼ prog2
prog-equiv = {!!}
Semantics: ``State ≡ StateF/∼``
-----------------------------------
Lawvere theory tells us that if we were to *quotient* the term algebra
``StateF`` with the equivalence relation ``∼``, we would obtain a
monad, the ``State`` monad. If you are familiar with Haskell, you
already know a State monad, which is usually defined as ``S → S × V``
to represent stateful computations using a single memory reference of
sort ``S`` and returning a result of sort ``V``.
However, in type theory (and in programming in general) quotienting
must be engineered. After thinking very hard, one realizes that every
term of ``StateF`` quotiented by ``∼`` will start with a ``get``,
followed by a ``set``, concluded with a ``return``. We thus expect the
following normal form::
State : Set → Set
State V = ΣGet (ΣSet V)
Unfolding the definition of ``ΣGet`` and ``ΣSet``, we realize that
this type is in fact isomorphic to ``S → S × V``: we have recovered
Haskell's ``State`` monad::
STATE : Set → Set
STATE V = S → S × V
It remains to substantiate this claim that *every* stateful program is
equivalent to a ``get`` followed by a ``set``. In the great tradition
of constructive mathematics, we should do so computationally, thus
inheriting a program computing these normal forms (also known as an
evaluator) as well as a proof that this program is correct. We resort
to a technique called `normalization-by-evaluation`_, which is spicy
hot Curry-Howard in action.
**Exercise (difficulty: 2)** the first step is to interpret stateful
terms into a suitable semantic domain which is **extensionally**
quotiented by the theory of State::
eval : ∀{A} → StateF A → STATE A
eval = {!!}
This function should satisfy the following unit-proofs::
test-eval-get : ∀ {A} tt (k : S → StateF A) s →
eval (get tt >>= k) s ≡ eval (k s) s
test-eval-get = {!!}
test-eval-set : ∀ {A} (k : ⊤ → StateF A) s s' →
eval (set s' >>= k) s ≡ eval (k tt) s'
test-eval-set = {!!}
**Exercise (difficulty: 1)** the second step consists in *reifying*
the semantic objects into the desired normal forms::
reify : ∀{A} → STATE A → State A
reify f = {!!}
The normalization procedure thus genuinely computes the normal form::
norm : ∀{A} → StateF A → State A
norm p = reify (eval p)
and these normal forms are indeed a subset of terms::
⌈_⌉ : ∀{A} → State A → StateF A
⌈ `get (tt , k) ⌉ = get tt >>= λ s → help (k s)
where help : ∀ {A} → ΣSet A → StateF A
help (`set (s , k)) = set s >>= λ _ → return (k tt)
Interpreting the statement *"for every stateful program, there exists a
normal form"* constructively means that we have a procedure for
computing this normal form. This is precisely the ``norm`` function.
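The whole pipeline can be approximated in Python (an illustrative sketch with our own encoding): interpret terms into ``S → S × V`` and compare programs through their interpretations. The programs ``prog1`` and ``prog2`` from the earlier exercise indeed agree.

```python
# Free terms as trees, then an evaluator into s -> (s', v).

def ret(v):      return ("return", v)
def get_(k):     return ("get", k)           # k : S -> term
def set_(x, k):  return ("set", x, k)        # k : () -> term

def evalt(t):                                 # eval : StateF V -> STATE V
    def run(s):
        if t[0] == "return":
            return (s, t[1])
        if t[0] == "get":
            return evalt(t[1](s))(s)
        _, x, k = t
        return evalt(k(()))(x)
    return run

# prog1 and prog2 from the exercise above:
prog1 = get_(lambda x: set_(1 + x, lambda _: get_(lambda y:
        set_(2 + x, lambda _: get_(lambda z:
        set_(3 + y, lambda _: ret(y)))))))
prog2 = get_(lambda x: set_(4 + x, lambda _: ret(1 + x)))

for s in range(10):
    assert evalt(prog1)(s) == (4 + s, 1 + s)
    assert evalt(prog1)(s) == evalt(prog2)(s)
```

Reading back the common interpretation as "get ``s``; set ``4 + s``; return ``1 + s``" is exactly the reification step.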
Monads strike back
------------------
Looking closely at the ``eval`` function, we notice that we *map*
syntactic objects -- of type ``StateF A`` -- to semantic objects --
of type ``STATE A``. The natural question to ask is whether all the
structure defined over ``StateF A`` carries over to ``STATE A``,
ie. is there a semantical counterpart to ``return``, ``get``, ``set``
and ``_>>=_``?
module Exercise-sem-monad where
**Exercise (difficulty: 1)** guided by ``eval``, implement the
semantical counterparts of ``return``, ``get`` and ``set``::
sem-return : ∀{A} → A → STATE A
sem-return a = {!!}
sem-get : ⊤ → STATE S
sem-get tt = {!!}
sem-set : S → STATE ⊤
sem-set s = {!!}
Unit-proof your definitions with respect to their syntactic counterparts::
test-sem-return : ∀ {X}{x : X} → eval (return x) ≡ sem-return x
test-sem-return = {!!}
test-sem-get : ∀{s} → eval (get tt) s ≡ sem-get tt s
test-sem-get = {!!}
test-sem-set : ∀{s s'} → eval (set s') s ≡ sem-set s' s
test-sem-set = {!!}
module Exercise-sem-bind where
**Exercise (difficulty: 2)** similarly, there is a ``_>>=_`` over
semantical states::
_sem->>=_ : ∀ {X Y} → (mx : STATE X)(k : X -> STATE Y) → STATE Y
_sem->>=_ mx k = {!!}
whose unit-proof is::
test-eval-compose : ∀ {X Y} (mx : StateF X)(k : X -> StateF Y) (s : S) →
eval (mx >>= k) s ≡ (eval mx sem->>= λ x → eval (k x)) s
test-eval-compose = {!!}
In conclusion, we have been able to transport *all* the syntactic
structure of ``StateF X`` to ``STATE X``. In fact, we could be so bold
as to directly work in ``STATE X``, ignoring ``StateF`` altogether:
this is what most purely functional programmers do currently.
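What "working directly in ``STATE``" looks like, as a Python sketch (the ``sem_*`` names mirror the exercises above; the encoding is ours): no syntax tree at all, just functions ``S → S × V``.

```python
# Semantic state monad: computations are functions s -> (s', v).

def sem_return(a):  return lambda s: (s, a)
def sem_get():      return lambda s: (s, s)
def sem_set(x):     return lambda s: (x, ())
def sem_bind(m, k):
    def run(s):
        s1, v = m(s)
        return k(v)(s1)
    return run

# prog2 from the earlier exercise, written directly in the semantics:
prog = sem_bind(sem_get(), lambda x:
       sem_bind(sem_set(4 + x), lambda _: sem_return(1 + x)))
assert prog(0) == (4, 1)
```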
Soundness & Completeness
------------------------
Now, we must prove that a term thus computed is indeed a normal
form. This is captured by two statements: a *soundness* result and a
*completeness* result.
**Exercise (difficulty: 4)** at first, we assume the following two
lemmas (whose proof is left as an exercise)::
pf-sound : ∀{A} → (p : StateF A) → p ∼ ⌈ norm p ⌉
pf-sound = {!!}
pf-complete : ∀ {A} {p q : StateF A} → p ∼ q → ∀{s} → eval p s ≡ eval q s
pf-complete = {!!}
so as to focus on the overall architecture of the proof.
First, ``norm`` is sound: if two terms have the same normal form, they
belong to the same congruence class::
sound : ∀ {V} (p q : StateF V) → ⌈ norm p ⌉ ≡ ⌈ norm q ⌉ → p ∼ q
sound {V} p q r = begin
    p
  ≈⟨ pf-sound p ⟩
    ⌈ norm p ⌉
  ≡⟨ r ⟩
    ⌈ norm q ⌉
  ≈⟨ sym (pf-sound q) ⟩
    q
  ∎
  where open import Relation.Binary.EqReasoning (setoid V)
Second, ``norm`` is complete: if two terms belong to the same congruence
class, they have the same normal form::
complete : ∀ {A} {p q : StateF A} → p ∼ q → ⌈ norm p ⌉ ≡ ⌈ norm q ⌉
complete {p = p} {q} r = begin
    ⌈ norm p ⌉
  ≡⟨ refl ⟩
    ⌈ reify (eval p) ⌉
  ≡⟨ cong≡ (λ x → ⌈ reify x ⌉) (ext (λ z → pf-complete r)) ⟩
    ⌈ reify (eval q) ⌉
  ≡⟨ refl ⟩
    ⌈ norm q ⌉
  ∎
  where open ≡-Reasoning
        postulate ext : Extensionality
Note that this last proof needs functional extensionality (which, in
Agda, is an axiom that does not compute). This is not a problem here
since we are building a proof, whose computational content is void (it
is entirely contained in the ``norm`` function).
From a programming perspective, ``norm`` gives us an interpreter for