..
::
{-# OPTIONS --allow-unsolved-metas --rewriting #-}
open import Level renaming (zero to 0ℓ ; suc to sucℓ)
open import Data.Empty
open import Data.Unit hiding (_≤_ ; _≤?_)
open import Data.Bool
open import Data.Maybe hiding (map) renaming (monad to monad-Maybe)
open import Data.Product hiding (map)
open import Data.Sum hiding (map)
open import Data.Nat
open import Data.Fin hiding (_+_ ; _≤_ ; _<_ ; _-_ ; pred ; _≤?_)
open import Data.Vec hiding (_>>=_ ; _++_)
open import Function hiding (id)
open import Relation.Nullary
open import Relation.Nullary.Decidable hiding (map)
open import Relation.Binary
open import Relation.Binary.PropositionalEquality
open import Category.Monad
module 03-total.Recursion where
{-# BUILTIN REWRITE _≡_ #-}
-- being lazy in the implementation of `pick1`
addZ : ∀ x → x + 0 ≡ x
addZ zero = refl
addZ (suc x) = cong suc (addZ x )
addS : ∀ x {y} → x + (suc y) ≡ suc (x + y)
addS zero = refl
addS (suc x) = cong suc (addS x )
{-# REWRITE addS addZ #-}
================================================================
Total functional programming
================================================================
Last week:
- use indices to guarantee totality
- definitions by induction over (faithful) syntax
- one shade of salmon: structural substitution
Today, we shall:
- study non-obviously terminating programs
- argue about termination *against* Agda's termination checker
- argue about termination *with* Agda's typechecker
Outline:
#. First-order unification: using indices to provide order
#. Proof search: using induction to witness termination
#. Bove-Capretta: using monads to postpone The Talk about termination
The vision: `Total Functional Programming`_
- zealously pursued with dependent types by `Conor McBride <http://strictlypositive.org/>`_
- at the origin of *all* of today's examples
************************************************
First-order Unification
************************************************
..
::
module UnifNaive where
open import Data.Maybe
open import Category.Monad
open RawMonadZero {0ℓ} Data.Maybe.monadZero
In this first example, we set out to implement first-order
unification. We first implement/specify the problem assuming general
recursion and then set out to re-implement it in a structurally
recursive manner. This presentation was given by McBride in
`First-order unification by structural recursion`_.
--------------------------------
Specification: terms
--------------------------------
We study a simple syntax of terms with variables ``var`` (represented
by natural numbers), a nullary constructor ``leaf`` and a binary
constructor ``fork``::
Var = ℕ
data Term : Set where
var : (i : Var) → Term
leaf : Term
fork : (s t : Term) → Term
One can easily generalize this to any term signature but this would
needlessly pollute our definitions. Crucially, all the definitions
below will behave *functorially* over ``leaf`` and ``fork``.
Indeed, ``Term`` is a free term algebra! It therefore comes with a
simultaneous substitution::
sub : (Var → Term) → Term → Term
sub ρ (var i) = ρ i
sub ρ leaf = leaf
sub ρ (fork s t) = fork (sub ρ s) (sub ρ t)
_∘K_ : (Var → Term) → (Var → Term) → Var → Term
ρ₁ ∘K ρ₂ = λ k → sub ρ₁ (ρ₂ k)
In the first lecture, the function ``sub`` was called ``bind`` but it
is otherwise exactly the same thing.
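To experiment with these definitions outside of Agda, here is a minimal
Python sketch of the same free term algebra; the tagged-tuple encoding
and the name ``compose`` are our own and merely mirror ``sub`` and
``_∘K_`` above:

```python
# Tagged-tuple model of Term: ("var", i), ("leaf",) and ("fork", s, t).
def var(i): return ("var", i)
leaf = ("leaf",)
def fork(s, t): return ("fork", s, t)

def sub(rho, t):
    """Simultaneous substitution: apply rho : Var -> Term to every
    variable of t; leaf and fork are mapped functorially."""
    if t[0] == "var":
        return rho(t[1])
    if t[0] == "leaf":
        return leaf
    return fork(sub(rho, t[1]), sub(rho, t[2]))

def compose(rho1, rho2):
    """Kleisli composition, as in _∘K_: (rho1 ∘K rho2) k = sub rho1 (rho2 k)."""
    return lambda k: sub(rho1, rho2(k))

# Substituting `fork leaf leaf` for variable 0 in `fork (var 0) leaf`:
rho = lambda i: fork(leaf, leaf) if i == 0 else var(i)
assert sub(rho, fork(var(0), leaf)) == fork(fork(leaf, leaf), leaf)
```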
--------------------------------
Specification: occur-check
--------------------------------
For a variable ``i : Var`` and a term ``t : Term``, there are two cases for the occur-check ``check i t``:
- either ``i`` occurs in ``t``,
- or ``i`` does not occur in ``t``
In the latter case, the term can be made "smaller" by noticing that,
since ``i`` does not occur in ``t``, we can rename every variable ``j
> i`` to ``j - 1`` while leaving every ``j < i`` unmodified.
To this end, we implement a test-and-reduce operator ``i Δ j`` on
variables ``i, j : Var`` which fails if the two variables are equal
(which will be interpreted as a successful occur-check) and returns the
renamed ``j`` otherwise::
_Δ_ : Var → Var → Maybe Var
zero Δ zero = ∅
zero Δ suc y = return y
suc x Δ zero = return zero
suc x Δ suc y = x Δ y >>= λ y' →
return (suc y')
The occur-check then simply applies ``Δ`` to every variable of the
given term, unless ``i`` actually occurs, in which case it fails::
check : Var → Term → Maybe Term
check i (var j) = i Δ j >>= λ z →
return (var z)
check i leaf = return leaf
check i (fork f s) = check i f >>= λ r1 →
check i s >>= λ r2 →
return (fork r1 r2)
When ``check i t`` succeeds in producing a term ``t'``, we thus know
that ``i`` did not occur in ``t`` and that all variables above ``i``
have been lowered by 1 in ``t'``.
--------------------------------
Specification: substitution
--------------------------------
We have seen that terms are endowed with a substitution operator that
expects a function of type ``Var → Term``. The role of first-order
unification is to compute such a substitution. However, the objects
we will be building carry much more structure than the rather bland
function space ``Var → Term``.
So, as usual, we make a detour through a precise and inductive
characterization of the mappings from variables to terms that we shall
consider::
data Subst : Set where
id : Subst
_∷[_/_] : (σ : Subst)(t : Term)(i : Var) → Subst
In turn, we interpret this initial model in the target
one. Substituting a single term ``t : Term`` for a variable ``i : Var``
amounts to a substitution that returns ``t`` whenever ``j`` equals
``i``, and the renaming ``i Δ j`` of ``j`` otherwise::
_for_ : Term → Var → (Var → Term)
(t for i) j with i Δ j
... | nothing = t
... | just j' = var j'
The interpretation of a list of single substitutions is merely
function composition::
⟦_⟧ : Subst → (Var → Term)
⟦ id ⟧ = var
⟦ ρ ∷[ t / i ] ⟧ = ⟦ ρ ⟧ ∘K (t for i)
-----------------------------------
Specification: most-general unifier
-----------------------------------
The computation of the most-general unifier works by accumulating a
substitution as it explores matching subterms (case ``amgu (fork s₁
t₁) (fork s₂ t₂)``) and then discharging that substitution (case
``amgu s t (σ ∷[ r / z ])``). Variables are only considered under no
substitution (cases ``amgu _ _ id``), in which case we must either
solve a flex-flex problem or a flex-rigid problem::
flex-flex : (x y : Var) → Subst
flex-rigid : (x : Var)(t : Term) → Maybe Subst
{-# TERMINATING #-}
amgu : (s t : Term)(acc : Subst) → Maybe Subst
-- Conflicts:
amgu leaf (fork _ _) _ = ∅
amgu (fork _ _) leaf _ = ∅
-- Matches:
amgu leaf leaf acc = return acc
amgu (fork s₁ t₁) (fork s₂ t₂) acc = amgu s₁ s₂ acc >>= λ acc →
amgu t₁ t₂ acc
-- Variables flex-flex:
amgu (var x) (var y) id = return (flex-flex x y)
-- Variables flex-rigid:
amgu (var x) t id = flex-rigid x t
amgu t (var x) id = flex-rigid x t
-- Terms under substitution:
amgu s t (σ ∷[ r / z ]) = amgu (sub (r for z) s)
(sub (r for z) t) σ >>= λ σ →
return (σ ∷[ r / z ])
flex-flex x y with x Δ y
... | just y' = id ∷[ var y' / x ]
... | nothing = id
flex-rigid x t = check x t >>= λ t' →
return (id ∷[ t' / x ])
mgu : (s t : Term) → Maybe Subst
mgu s t = amgu s t id
..
::
v₀ v₁ v₂ v₃ : Term
v₀ = var 0
v₁ = var 1
v₂ = var 2
v₃ = var 3
Assuming that the above definition is terminating, we can test it on a
few examples::
test₁ : mgu (fork v₀ leaf) (fork (fork leaf leaf) v₁)
≡ just (id ∷[ leaf / 0 ] ∷[ (fork leaf leaf) / 0 ])
test₁ = refl
test₂ : mgu (fork v₀ leaf) (fork (fork leaf leaf) v₃)
≡ just (id ∷[ leaf / 2 ] ∷[ (fork leaf leaf) / 0 ])
test₂ = refl
test₃ : mgu v₀ (fork leaf v₀)
≡ nothing
test₃ = refl
test₄ : mgu (fork v₀ leaf) (fork (fork leaf leaf) v₀)
≡ nothing
test₄ = refl
test₅ : mgu (fork v₀ v₁) (fork (fork leaf v₁) (fork leaf leaf))
≡ just (id ∷[ fork leaf leaf / 0 ] ∷[ fork leaf (var zero) / 0 ])
test₅ = refl
--------------------------------
Structurally: terms
--------------------------------
..
::
module Unif where
open import Category.Monad
open RawMonadZero {0ℓ} Data.Maybe.monadZero
As it stands, we will have a hard time convincing Agda that this
implementation is indeed terminating: the terms grow as substitutions
are discharged, while the accumulated substitution itself grows as
flex-rigid problems are solved.
Part of the problem lies in the fact that, whilst we have the
intuition that the number of variables occurring in terms keeps
decreasing as unification proceeds, this intuition is not documented
in the code. Let us try again, using indexing as a machine-checked
form of documentation.
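Before turning to indexing, the measure can at least be checked
experimentally. The following Python sketch (tagged-tuple terms as
before; the names ``variables`` and ``t_for`` are ours) confirms that
discharging a single substitution eliminates one variable, so the set
of variables a term may mention strictly shrinks:

```python
# Tagged-tuple terms, mirroring the specification above.
def var(i): return ("var", i)
leaf = ("leaf",)
def fork(s, t): return ("fork", s, t)

def variables(t):
    """The set of variables occurring in t."""
    if t[0] == "var":
        return {t[1]}
    if t[0] == "leaf":
        return set()
    return variables(t[1]) | variables(t[2])

def sub(rho, t):
    if t[0] == "var":
        return rho(t[1])
    if t[0] == "leaf":
        return leaf
    return fork(sub(rho, t[1]), sub(rho, t[2]))

def t_for(tp, i):
    """The substitution `tp for i`: replace i by tp, lower every j > i."""
    return lambda j: tp if j == i else (var(j - 1) if j > i else var(j))

# Over variables {0, 1, 2}, discharging a substitution for 2 leaves a
# term over {0, 1} only: the measure strictly decreases.
t = fork(var(2), fork(var(0), var(1)))
assert variables(sub(t_for(leaf, 2), t)) == {0, 1}
```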
We now stratify the set of variables, i.e. ``Var n`` contains ``n``
distinct variables::
Var : ℕ → Set
Var = Fin
We can thus represent *terms with (at most) ``n`` variables*::
data Term (n : ℕ) : Set where
var : (i : Var n) → Term n
leaf : Term n
fork : (s t : Term n) → Term n
**Exercise (difficulty: 1)** Once again, we can implement
substitution::
sub : ∀ {m n} → (Var m → Term n) → Term m → Term n
sub ρ t = {!!}
_∘K_ : ∀ {m n l} → (Var m → Term n) → (Var l → Term m) → Var l → Term n
ρ₁ ∘K ρ₂ = {!!}
**Exercise (difficulty: 1)** Implement the (obvious) renaming
operation::
ren : ∀ {m n} → (Var m → Var n) → Term m → Term n
ren σ t = {!!}
**Remark:** Two substitutions are equal if they are equal pointwise::
_≐_ : ∀ {m n} → (f g : Var m → Term n) → Set
f ≐ g = ∀ x → f x ≡ g x
--------------------------------
Structurally: variable extrusion
--------------------------------
Variable comparison becomes more informative for Agda since we can
witness in the return type that the variable ``y`` was definitely
distinct from ``x`` and, therefore, belongs to a strictly smaller
class of variables::
_Δ_ : ∀ {n} → Var (suc n) → Var (suc n) → Maybe (Var n)
zero Δ zero = ∅
zero Δ suc y = return y
_Δ_ {zero} (suc ())
_Δ_ {suc _} (suc x) zero = return zero
_Δ_ {suc _} (suc x) (suc y) = x Δ y >>= λ y' →
return (suc y')
..
::
module Exercise-inj where
**Exercise (difficulty: 1)** The operation ``Δ`` can be understood as
the partial inverse of the following injection from ``Var n`` to ``Var
(suc n)``, which embeds ``Var n`` while avoiding the variable ``i``::
inj[_] : ∀ {n} → (i : Var (suc n)) → Var n → Var (suc n)
inj[ zero ] y = suc y
inj[ suc x ] zero = zero
inj[ suc x ] (suc y) = suc (inj[ x ] y)
Prove the following lemmas, the last being one way to state that
``inj[_]`` is the partial inverse of ``Δ``::
lemma-inj1 : ∀ {n} x y z → inj[_] {n} x y ≡ inj[_] x z → y ≡ z
lemma-inj1 = {!!}
lemma-inj2 : ∀ {n} x y → inj[_] {n} x y ≢ x
lemma-inj2 = {!!}
lemma-inj3 : ∀ {n} x y → x ≢ y → ∃ λ y' → inj[_] {n} x y' ≡ y
lemma-inj3 = {!!}
lemma-inj-Δ : ∀ {n}(x y : Var (suc n))(r : Maybe (Var n)) →
x Δ y ≡ r → ((y ≡ x × r ≡ nothing) ⊎ (∃ λ y' → y ≡ inj[ x ] y' × r ≡ just y'))
lemma-inj-Δ = {!!}
Another way to construct ``Δ`` is to obtain it as a view (``inj-view``
is essentially a proof-carrying version of ``Δ``)::
data inj-View {n}(i : Var (suc n)) : Var (suc n) → Set where
just : (k : Var n) → inj-View i (inj[ i ] k)
eq : inj-View i i
inj-view : ∀ {n}(i : Var (suc n))(j : Var (suc n)) → inj-View i j
inj-view i j = {!!}
..
::
open Exercise-inj
--------------------------------
Structurally: occur-check
--------------------------------
Following ``Δ``, the occur-check reflects in its type the fact that,
in case of success, the resulting term does not use the checked
variable::
check : ∀ {n} → (i : Var (suc n))(t : Term (suc n)) → Maybe (Term n)
check i (var j) = i Δ j >>= λ k →
return (var k)
check i leaf = return leaf
check i (fork f s) = check i f >>= λ r1 →
check i s >>= λ r2 →
return (fork r1 r2)
..
::
module Exercise-check where
If we were able to extrude ``x`` out of ``t``, obtaining ``t'``, then
renaming ``t'`` along the injection ``inj[ x ]`` yields back exactly
the term ``t``::
lemma-check : ∀ {n} x t {t'} → check {n} x t ≡ just t' → ren (inj[ x ]) t' ≡ t
lemma-check x y p = {!!}
..
::
open Exercise-check
--------------------------------------
Structurally: single term substitution
--------------------------------------
Crucially, a (single) substitution sends each variable to a term with
one variable fewer::
_for_ : ∀ {n} → Term n → Var (suc n) → (Var (suc n) → Term n)
(t' for x) y with x Δ y
... | just y' = var y'
... | nothing = t'
..
::
module Exercise-for where
The composition of ``_for_`` and ``inj[_]`` amounts to an identity::
lemma-for-inj : ∀ {n} (t : Term n) x → ((t for x) ∘ (inj[ x ])) ≐ var
lemma-for-inj = {!!}
lemma-check-inj : ∀ {n} x t t' → check {n} x t ≡ just t' →
sub (t' for x) t ≡ sub (t' for x) (var x)
lemma-check-inj = {!!}
..
::
open Exercise-for
--------------------------------------
Structurally: substitution
--------------------------------------
A substitution is now indexed by (upper bounds on) the number of
variables before and after it applies::
data Subst : ℕ → ℕ → Set where
id : ∀ {n} → Subst n n
_∷[_/_] : ∀ {m n} → (σ : Subst m n)(t' : Term m)(x : Var (suc m)) → Subst (suc m) n
⟦_⟧ : ∀ {m n} → Subst m n → (Var m → Term n)
⟦_⟧ id = var
⟦_⟧ (ρ ∷[ t' / x ]) = ⟦ ρ ⟧ ∘K (t' for x)
..
::
module Exercise-Subst where
**Exercise (difficulty: 1)** Implement composition on the inductive
characterization of substitutions and show that it corresponds to the
underlying composition of substitutions::
_∘A_ : ∀ {l m n} → Subst m n → Subst l m → Subst l n
ρ ∘A σ = {!!}
lemma-comp : ∀ {l m n} (ρ : Subst m n)(σ : Subst l m) → ⟦ ρ ∘A σ ⟧ ≡ ⟦ ρ ⟧ ∘K ⟦ σ ⟧
lemma-comp = {!!}
--------------------------------------
Structurally: most-general unifier
--------------------------------------
The implementation of the most-general unifier is exactly the same,
except that termination has become self-evident: when discharging a
substitution (case ``amgu {suc k} _ _ (m , (σ ∷[ r / z ]))``), the
next call to ``amgu`` is on terms with ``k < suc k``
variables. It is therefore definable by structural recursion, and Agda
is able to spot it::
flex-flex : ∀ {m} → (x y : Var m) → ∃ (Subst m)
flex-rigid : ∀ {m} → (x : Var m)(t : Term m) → Maybe (∃ (Subst m))
amgu : ∀ {m} → (s t : Term m)(acc : ∃ (Subst m)) → Maybe (∃ (Subst m))
-- Conflicts:
amgu leaf (fork _ _) _ = ∅
amgu (fork _ _) leaf _ = ∅
-- Matches:
amgu leaf leaf acc = return acc
amgu (fork s₁ t₁) (fork s₂ t₂) acc = amgu s₁ s₂ acc >>= λ acc →
amgu t₁ t₂ acc
-- Variables flex-flex:
amgu (var x) (var y) (m , id) = return (flex-flex x y)
-- Variables flex-rigid:
amgu (var x) t (m , id) = flex-rigid x t
amgu t (var x) (m , id) = flex-rigid x t
-- Terms under substitution:
amgu {suc k} s t (m , (σ ∷[ r / z ])) =
amgu {k} (sub (r for z) s)
(sub (r for z) t) (m , σ) >>= λ { (n , σ) →
return ((n , σ ∷[ r / z ])) }
flex-flex {zero} ()
flex-flex {suc _} x y with x Δ y
... | just y' = , id ∷[ var y' / x ]
... | nothing = , id
flex-rigid {0} ()
flex-rigid {suc _} x t = check x t >>= λ t' →
return (, id ∷[ t' / x ])
mgu : ∀ {m} → (s t : Term m) → Maybe (∃ (Subst m))
mgu s t = amgu s t (, id)
The key idea was thus to reify the (decreasing) *measure* as an
indexing discipline. Our implementation is then naturally defined by
recursion over this index, yielding a structurally acceptable
definition.
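For comparison, the same measure can be threaded dynamically in a
language without dependent types. The following Python transcription
(our own: tagged-tuple terms, substitutions as lists of ``(term, var)``
entries with the outermost entry first, all helper names ours)
terminates for the same reason, each substitution entry being
discharged on terms with one variable fewer, but nothing checks it:

```python
def var(i): return ("var", i)
leaf = ("leaf",)
def fork(s, t): return ("fork", s, t)

def delta(x, y):
    """x Δ y: None when x == y, otherwise y with variables above x lowered."""
    if x == y:
        return None
    return y - 1 if y > x else y

def check(i, t):
    """Occur-check-and-lower: None when i occurs in t."""
    if t[0] == "var":
        z = delta(i, t[1])
        return None if z is None else var(z)
    if t[0] == "leaf":
        return leaf
    r1, r2 = check(i, t[1]), check(i, t[2])
    return None if r1 is None or r2 is None else fork(r1, r2)

def subst1(r, z, t):
    """Apply the single substitution `r for z` throughout t."""
    if t[0] == "var":
        y = delta(z, t[1])
        return r if y is None else var(y)
    if t[0] == "leaf":
        return leaf
    return fork(subst1(r, z, t[1]), subst1(r, z, t[2]))

def amgu(s, t, acc):
    # Terms under substitution: discharge the outermost entry; the
    # recursive call works on terms with one variable fewer.
    if acc:
        (r, z), sigma = acc[0], acc[1:]
        res = amgu(subst1(r, z, s), subst1(r, z, t), sigma)
        return None if res is None else [(r, z)] + res
    if s[0] == "leaf" and t[0] == "leaf":
        return acc
    if s[0] == "fork" and t[0] == "fork":
        acc1 = amgu(s[1], t[1], acc)
        return None if acc1 is None else amgu(s[2], t[2], acc1)
    if s[0] == "var" and t[0] == "var":   # flex-flex
        y = delta(s[1], t[1])
        return [] if y is None else [(var(y), s[1])]
    if s[0] == "var":                     # flex-rigid
        tp = check(s[1], t)
        return None if tp is None else [(tp, s[1])]
    if t[0] == "var":
        sp = check(t[1], s)
        return None if sp is None else [(sp, t[1])]
    return None                           # leaf vs fork: conflict

def mgu(s, t):
    return amgu(s, t, [])

assert mgu(var(0), fork(leaf, var(0))) is None   # occur-check failure
```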
**Exercise (difficulty: 3)** Prove the *soundness* of your
implementation: the substitution thus computed is indeed a valid
unifier. The lemmas left as exercises will be useful there.
**Exercise (difficulty: 5)** Prove the *completeness* of your
implementation: the substitution thus computed is indeed the most
general one. You may want to engage in some `archaeological
investigation
<http://www.strictlypositive.org/foubsr-website/unif.l>`_ or have a
look at the literature such as, for example, `Type inference in
context`_.
************************************************
Proof search
************************************************
In this second example, we study a decision procedure studied by Roy
Dyckhoff in `Contraction-free sequent calculi for intuitionistic
logic`_ and turned into type theory by Conor McBride in `Djinn,
monotonic`_.
--------------------------------------
Specification
--------------------------------------
..
::
module DjinnNaive (A : Set)(_≟_ : Decidable {A = A} _≡_) where
open import Data.List
open import Data.Vec hiding (_++_)
infixr 70 _⊃_
Bwd : Set → Set
Bwd A = List A
pattern _▹_ xs x = x ∷ xs
pattern ε = []
Fwd : Set → Set
Fwd A = List A
pattern _◃_ x xs = x ∷ xs
We consider the purely negative fragment of propositional logic::
data Formula : Set where
Atom : (a : A) → Formula
_⊃_ : (P Q : Formula) → Formula
The decision procedure checks whether a Formula (in a context) is
true. This amounts to implementing a traditional focusing presentation
of the sequent calculus::
{-# TERMINATING #-}
_⊢_ : List Formula → Formula → Bool
_[_]⊢_ : List Formula → Formula → A → Bool
_><_⊢ax_ : Bwd Formula → Fwd Formula → A → Bool
Γ ⊢ P ⊃ Q = (Γ ▹ P) ⊢ Q
Γ ⊢ Atom a = ε >< Γ ⊢ax a
Δ >< ε ⊢ax α = false
Δ >< (P ◃ Γ) ⊢ax α = (Δ ++ Γ) [ P ]⊢ α
∨ (Δ ▹ P) >< Γ ⊢ax α
Γ [ Atom α ]⊢ β = ⌊ α ≟ β ⌋
Γ [ P ⊃ Q ]⊢ α = Γ [ Q ]⊢ α ∧ Γ ⊢ P
⊢_ : Formula → Bool
⊢ P = [] ⊢ P
This definition is terminating, but not obviously so. The crux of the
matter lies in ``_><_⊢ax_``, which shrinks the context in the call
``(Δ ++ Γ) [ P ]⊢ α``, while ``_⊢_``, reached through ``_[_]⊢_``,
grows it again.
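To trace the search concretely, here is a Python rendering of the three
mutually recursive judgements; the encoding and the names ``proves``,
``ax`` and ``focus`` are ours, and the same caveat applies: this naive
version is terminating on these examples, but not obviously so in
general:

```python
# Formulas as tagged tuples: ("atom", a) and ("imp", p, q); contexts as
# Python lists.
def atom(a): return ("atom", a)
def imp(p, q): return ("imp", p, q)

def proves(gamma, p):
    """Γ ⊢ P : introduce hypotheses, then search for an axiom."""
    if p[0] == "imp":
        return proves(gamma + [p[1]], p[2])
    return ax([], gamma, p[1])

def ax(delta, gamma, a):
    """Δ >< Γ ⊢ax α : focus on each hypothesis of Γ in turn."""
    if not gamma:
        return False
    p, rest = gamma[0], gamma[1:]
    return focus(delta + rest, p, a) or ax(delta + [p], rest, a)

def focus(gamma, p, a):
    """Γ [ P ]⊢ α : decompose the focused hypothesis."""
    if p[0] == "atom":
        return p[1] == a
    return focus(gamma, p[2], a) and proves(gamma, p[1])

A, B, R = atom(0), atom(1), atom(2)
cps = lambda p: imp(imp(p, R), R)

assert proves([], imp(A, imp(B, A)))     # ⊢ A ⊃ B ⊃ A
assert not proves([], imp(A, B))         # ⊬ A ⊃ B
assert proves([], imp(A, cps(A)))        # the CPS `return`
```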
..
::
module TestNaive where
open DjinnNaive ℕ Data.Nat._≟_
A = Atom 0
B = Atom 1
∐ = Atom 2
Here are a few tests::
test₁ : ⊢ A ⊃ B ⊃ A ≡ true
test₁ = refl
test₂ : ⊢ A ⊃ B ≡ false
test₂ = refl
CPS : Formula → Formula
CPS A = (A ⊃ ∐) ⊃ ∐
return : ⊢ A ⊃ CPS A ≡ true
return = refl
bind : ⊢ CPS A ⊃ (A ⊃ CPS B) ⊃ CPS B ≡ true
bind = refl
call-cc : ⊢ ((A ⊃ CPS B) ⊃ CPS A) ⊃ CPS A ≡ true
call-cc = refl
--------------------------------------
Structural search
--------------------------------------
..
::
module DjinnStructural (A : Set)(_≟_ : Decidable {A = A} _≡_) where
open import Data.Vec
open DjinnNaive hiding (Formula ; _⊢_ ; _[_]⊢_ ; ⊢_) public
infix 60 _/_⊢_
infix 60 _/_[_]⊢_
Following the lesson from the first part, we turn the ordering that
justifies our definition into an indexing discipline. Despite the
fact that the context shrinks and then grows, an important observation
is that, when a formula is taken out of the context, the formulae that
may subsequently be inserted are necessarily its premises, of *strictly
lower order*. We thus capture the (upper bound on the) order of
formulae by a suitable indexing strategy::
data Formula : ℕ → Set where
Atom : ∀ {n} → (a : A) → Formula n
_⊃_ : ∀ {n} → (P : Formula n)(Q : Formula (suc n)) → Formula (suc n)
The representation of contexts also needs to be stratified, so that
formulae come sorted by their respective order::
Bucket : Set → Set
Bucket X = Σ[ n ∈ ℕ ] (Vec X n)
<