While it is very interesting to investigate the power of inductive families, I am not sure whether inductive families should be in the official version of Agda, or whether they should instead be only in an experimental version.
The two main reasons are
- most examples that are now done with inductive families can be done without them
- the implementation of inductive families, with pattern-matching, involves delicate steps, among them a unification algorithm, which may not be so easy to explain to the user
Let us explain the first point with some examples. The first one was discussed recently on the Coq mailing list (in an answer by Russell O'Connor). One wants to define the subset of Nat having only 0 and 1 as elements.
The definition with inductive families is
data Q : Nat -> Set where
  c0 : Q 0
  c1 : Q 1
The direct way of defining it is
Q' : Nat -> Set
Q' = \ x -> x < 2
The definition via inductive families was preferred with the (wrong) argument that it gives for free the induction principle: for proving
foo : (x : Nat) -> Q x -> B
it is enough to define
foo 0 c0 : B(0) = p0
foo 1 c1 : B(1) = p1
As argued by Russell, this is not a good reason since it is possible to show
bar : (P : Nat -> Set) -> P 0 -> P 1 -> (x : Nat) -> Q' x -> P x
and so one can define simply (without inductive families)
foo : (x : Nat) -> Q' x -> B
foo = bar (\ x -> B) p0 p1
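For concreteness, here is a minimal self-contained sketch of how bar can be proved by case analysis on x, without any inductive family. The definitions of Nat, N0, N1 and _<_ below are assumptions made for this example (written with zero and suc instead of numeric literals), not part of the original discussion.

data N0 : Set where

data N1 : Set where
  tt : N1

data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat

-- x < y by recursion, so that Q' x reduces on constructor patterns
_<_ : Nat -> Nat -> Set
_     < zero  = N0
zero  < suc _ = N1
suc m < suc n = m < n

Q' : Nat -> Set
Q' x = x < suc (suc zero)   -- i.e. x < 2

bar : (P : Nat -> Set) -> P zero -> P (suc zero) -> (x : Nat) -> Q' x -> P x
bar P p0 p1 zero          _  = p0
bar P p0 p1 (suc zero)    _  = p1
bar P p0 p1 (suc (suc x)) ()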
The second example is the set of vectors. While it is sometimes defined as an inductive family, one can argue that the following definition is clearer
vec : Set -> Nat -> Set
vec A 0     = N1
vec A (S n) = A x (vec A n)
See Conor McBride's examples for nice examples of what can be done with this definition.
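As a small illustration (a hypothetical sketch, reusing Nat and N1 from the sketch above and adding an assumed product type _×_), a safe head function on this recursive definition needs no inductive family, only projection from the product:

data _×_ (A B : Set) : Set where
  _,_ : A -> B -> A × B

vec : Set -> Nat -> Set
vec A zero    = N1
vec A (suc n) = A × vec A n

-- vec A (suc n) reduces to a product, so head is just the first projection.
head : {A : Set} {n : Nat} -> vec A (suc n) -> A
head (x , xs) = x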
There are some examples that seem to need the full power of inductive families, but they are quite subtle. One of them is the fact that any set forms a groupoid for the Id type. This needs the full strength of the elimination rule
(C : (x y : A) -> Id A x y -> Set) -> ((x : A) -> C x x (Ref x)) -> (x y : A) -> (p : Id A x y) -> C x y p
while most of the examples would need only the non-dependent version
(R : A -> A -> Set) -> ((x : A) -> R x x) -> (x y : A) -> Id A x y -> R x y
that expresses that Id A is the least reflexive relation over A.
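To make the contrast concrete, here is a sketch of how these rules are obtained by pattern matching, and of a groupoid law that needs the dependent rule. The definition of Id and the names J, lrr, sym, trans and unitRight are assumptions made for this example, not part of the original discussion.

data Id (A : Set) : A -> A -> Set where
  Ref : (x : A) -> Id A x x

-- The dependent elimination rule, defined by pattern matching.
J : (A : Set) (C : (x y : A) -> Id A x y -> Set) ->
    ((x : A) -> C x x (Ref x)) ->
    (x y : A) -> (p : Id A x y) -> C x y p
J A C c .x .x (Ref x) = c x

-- The non-dependent version is the special case where C ignores the proof.
lrr : (A : Set) (R : A -> A -> Set) -> ((x : A) -> R x x) ->
      (x y : A) -> Id A x y -> R x y
lrr A R r = J A (\ x y _ -> R x y) r

-- The groupoid operations already follow from the non-dependent rule ...
sym : (A : Set) (x y : A) -> Id A x y -> Id A y x
sym A = lrr A (\ x y -> Id A y x) Ref

trans : (A : Set) (x y z : A) -> Id A x y -> Id A y z -> Id A x z
trans A .x .x z (Ref x) q = q

-- ... but the groupoid laws are statements about identity proofs, and
-- proving them is an instance of dependent elimination, e.g. the law
-- that Ref is a right unit for trans:
unitRight : (A : Set) (x y : A) (p : Id A x y) ->
            Id (Id A x y) (trans A x y y p (Ref y)) p
unitRight A .x .x (Ref x) = Ref (Ref x)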
I think this point should be stressed in the presentation of Agda: most examples don't need inductive families and are actually clearer without them, and the importance of inductive families, involving the subtle dependent elimination, is still a topic of research.
The second point is that the unification algorithm in the pattern-matching is a little arbitrary (since it may in general involve terms that are not first-order). It also allows (too?) short proofs such as
f : (x : Nat) -> Id Nat x (S x) -> N0
f x ()
that may not be so easy to explain to the users.
Finally, it allows the derivation of Thomas Streicher's K rule, i.e. uniqueness of identity proofs. As shown by Michael Hedberg, this uniqueness is actually "provable" for sets over which the identity type is decidable (and this has recently been used elegantly by Georges Gonthier for proofs over finite sets). Making this rule provable means that we rule out from Agda the models described in the recent survey paper of Steve Awodey and Michael A. Warren, which may be a mistake.
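For reference, here is a minimal sketch of how the K rule is obtained by dependent pattern matching (reusing the Id type from the sketch above; the formulation is an assumption made for this example, not taken from the original page).

-- The clause below makes the unifier solve the equation x = x by
-- deletion; this is exactly the step a restricted form of pattern
-- matching would have to forbid.
K : (A : Set) (x : A) (P : Id A x x -> Set) ->
    P (Ref x) -> (p : Id A x x) -> P p
K A x P pr (Ref .x) = pr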
Maybe there is a natural restriction of the present pattern-matching algorithm which covers all the desired examples, but forbids the derivation of the K rule?
Examples which need inductive families
An example which I don't see how to do easily without inductive families is the representation of derivation systems. For instance the simply typed lambda calculus:
data Term (Γ : Ctx) : Type -> Set where
  var : forall {τ}   -> Var Γ τ -> Term Γ τ
  app : forall {σ τ} -> Term Γ (σ → τ) -> Term Γ σ -> Term Γ τ
  lam : forall {σ τ} -> Term (Γ , σ) τ -> Term Γ (σ → τ)
The characteristic feature of this example is that the structure of the family is different from the structure of the indices. The vector type is easy to define recursively precisely because the structure of the a vector is the same as the structure of the natural number representing its length.
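To make the example self-contained, here is a hypothetical sketch with assumed definitions of Type, Ctx and Var (de Bruijn style), and the identity function as a usage example; none of these auxiliary definitions are fixed by the original text, and the object-level arrow is written _⇒_ below because → is reserved in Agda.

data Type : Set where
  ι   : Type
  _⇒_ : Type -> Type -> Type

data Ctx : Set where
  ε   : Ctx
  _,_ : Ctx -> Type -> Ctx

data Var : Ctx -> Type -> Set where
  vz : forall {Γ τ}   -> Var (Γ , τ) τ
  vs : forall {Γ σ τ} -> Var Γ τ -> Var (Γ , σ) τ

data Term (Γ : Ctx) : Type -> Set where
  var : forall {τ}   -> Var Γ τ -> Term Γ τ
  app : forall {σ τ} -> Term Γ (σ ⇒ τ) -> Term Γ σ -> Term Γ τ
  lam : forall {σ τ} -> Term (Γ , σ) τ -> Term Γ (σ ⇒ τ)

-- Usage example: the identity function, well-typed in any context.
idTerm : forall {Γ τ} -> Term Γ (τ ⇒ τ)
idTerm = lam (var vz)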
Some comments by Peter:
More examples which need inductive families
The simply typed lambda calculus example is just one of numerous examples which appear naturally in logic and computer science. Systems of axioms and inference rules, type systems in general, operational semantics, formal languages generated by context-free grammars, etc are all inductive families.
The principle of rule induction is often used in computer science.
Pure logic programs (sets of Horn clauses) translate immediately to inductive families.
Ana's method gives domain predicates which are inductive families.
All these examples have index sets which are first-order data types, and most of the constructors refer to types where the indices are constructor patterns. An important counterexample is, however, the definition of beta-reduction, where the contractum has an index which is not a constructor pattern (it refers to the substitution function, which is not a constructor):
beta : app (lam b) a --> subst b a
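As a small illustration of the earlier point that pure logic programs translate directly to inductive families, here is a hypothetical sketch (all names are assumptions made for this example) of the Prolog predicate append/3 rendered as an Agda family:

data List (A : Set) : Set where
  []  : List A
  _∷_ : A -> List A -> List A

-- append([], Ys, Ys).
-- append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
data Append {A : Set} : List A -> List A -> List A -> Set where
  app-nil  : forall {ys} -> Append [] ys ys
  app-cons : forall {x xs ys zs} ->
             Append xs ys zs -> Append (x ∷ xs) ys (x ∷ zs)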
Is it better to use a "recursive family" rather than an inductive family whenever possible?
I agree with Thierry that sometimes inductive families are used when a recursive family would be clearer, like in the vector example. However, which is better may depend on the situation, and it seems nice to have both possibilities:
- the inductive family presents a "generative" view
- the recursive family presents a "reductive/computational" view
This distinction was useful in the work on combining testing and proving in Agda (by Qiao, Makoto, and me), where we used inductive types (including inductive families) for (randomly) generating correct inputs. Dually, it was necessary to have reductive/computational definitions of output properties.
---
A partial reply (Thierry)
Another example showing how much can be done without using inductive families (I think the little use there of the Id type is not even necessary) can be found here.
I would argue that for the next test example of use of Agda with AIST (CPU? verification?? I would like to know more about that) we try to see what can be done without inductive families.
My main worry is that inductive families complicate significantly the type-checking rules (if they are implemented via pattern-matching). We need to make type-checking as simple as possible, and we need first to understand the simple case. As I tried to argue, inductive families are quickly used for everything, and this can produce the misleading impression that they are a crucial feature of Agda (despite the fact that so much can be done without them).
For all the examples suggested so far by Ulf and Peter, the impredicative coding of inductive families would provide a cleaner implementation. The examples where inductive families are used with dependent eliminations are really subtle and rare.
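For instance, here is a rough sketch of what such a coding looks like for the family Q from the beginning of the page (one universe up, since Set is predicative in Agda; the names Q'', q0 and q1 are assumptions made for this example, and Nat with zero and suc is reused from the sketches above):

-- The Church-style coding of Q, together with codings of its constructors.
Q'' : Nat -> Set1
Q'' n = (P : Nat -> Set) -> P zero -> P (suc zero) -> P n

q0 : Q'' zero
q0 P p0 p1 = p0

q1 : Q'' (suc zero)
q1 P p0 p1 = p1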
Is Vec-as-a-function usable?
The Vec example has been discussed above. While it is possible to define vectors using a function, do we want to do that? The Vec type is often seen as a more precise version of the List type, and this is most easily seen when they are both defined as data. I don't think we should scare away functional programmers, who are familiar with data types (and now also GADTs), by defining Vec in a different way from List.
Furthermore, when functions are defined by explicit recursion, functions over a Vec defined as a function are quite a bit more verbose than those over a Vec defined as an inductive family:
map : forall {a b n} -> (a -> b) -> Vec a n -> Vec b n
map f []       = []
map f (x ∷ xs) = f x ∷ map f xs

map' : forall {a b} n -> (a -> b) -> Vec' a n -> Vec' b n
map' zero    f tt       = tt
map' (suc n) f (x , xs) = f x , map' n f xs
It also feels wrong to pattern match on a pair, when I want to pattern match on a cons. This can be worked around as follows (based on code written by Ulf):
data Nil : Set where
  nil : Nil

data Cons (A As : Set) : Set where
  _::_ : A -> As -> Cons A As

Vec : Set -> Nat -> Set
Vec A zero    = Nil
Vec A (suc n) = Cons A (Vec A n)

map : forall {a b} n -> (a -> b) -> Vec a n -> Vec b n
map zero    f nil       = nil
map (suc n) f (x :: xs) = f x :: map n f xs
Finally Vec defined as a function has a tendency to be harder to work with, since the definition unfolds when the natural number is not neutral. There is a workaround for this as well, adding even more overhead (code by Ulf):
-- When defining types by recursion it is sometimes difficult to infer implicit
-- arguments. This module illustrates the problem and shows how to get around
-- it for the example of vectors of a given length.
module Introduction.Data.ByRecursion where

data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat

data Nil : Set where
  nil' : Nil

data Cons (A As : Set) : Set where
  _::'_ : A -> As -> Cons A As

mutual
  Vec' : Set -> Nat -> Set
  Vec' A zero    = Nil
  Vec' A (suc n) = Cons A (Vec A n)

  data Vec (A : Set)(n : Nat) : Set where
    vec : Vec' A n -> Vec A n

nil : {A : Set} -> Vec A zero
nil = vec nil'

_::_ : {A : Set}{n : Nat} -> A -> Vec A n -> Vec A (suc n)
x :: xs = vec (x ::' xs)

map : {n : Nat}{A B : Set} -> (A -> B) -> Vec A n -> Vec B n
map {zero}  f (vec nil')       = nil
map {suc n} f (vec (x ::' xs)) = f x :: map f xs
Thierry writes that "We need to make type-checking as simple as possible", and I partially agree, but not at the expense of making programming unnecessarily complicated.
/NAD
Comments by Ulf and NAD:
Thierry wrote: The examples where inductive families are used with dependent eliminations are really subtle and rare.
We don't think this is true. A simple example: consider programs in a simple stack language where we keep track of the height of the stack:
data Prog : Nat -> Set where
  init : Prog zero
  push : forall {n} -> Prog n -> Prog (suc n)
  pop  : forall {n} -> Prog (suc n) -> Prog n
This family is very hard to implement recursively, since indices both increase and decrease, and we clearly need dependent elimination in order to reason about properties of the programs. For a complete example see examples/StackLanguage.agda
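As a small illustration (a hypothetical sketch, not the examples/StackLanguage.agda example; Stack and exec are assumed names, Nat and Prog are as above, and since push carries no value in the sketch a dummy zero is pushed), here is how pattern matching on the family runs a program to a stack of the recorded height:

data Stack : Nat -> Set where
  []  : Stack zero
  _∷_ : forall {n} -> Nat -> Stack n -> Stack (suc n)

exec : forall {n} -> Prog n -> Stack n
exec init     = []
exec (push p) = zero ∷ exec p
exec (pop p)  with exec p
...           | x ∷ xs = xs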
Discussion about inductive families (rough notes by Patrik)
Can Agda have a flag turning on or off
- general inductive families?
- the K-rule?
- dependent elimination?
Conor: The way you structure your problem really depends on whether you have general inductive families.
Peter:
- we all accept restricted inductive families (RIF)
- ( ... we accept constructor form indices )
- we also accept those families which can be easily reduced to RIF (where index has decidable equality)
Conor:
- I have many examples of higher-order indices
- the encoding behaves differently when evaluating
  -- Sketch of example:
  data Code : Nat -> Nat -> Set where
    PUSH : {i : Nat} -> Nat -> Code i (suc i)
    ADD  : {i : Nat} -> Code (suc (suc i)) (suc i)
    -- ...
  -- Sem i j = Vec Nat i -> Vec Nat j
  data CodeSem : (i j : Nat) -> Sem i j -> Set where
    PUSHS : {i : Nat} -> (n : Nat) -> CodeSem i (suc i) (cons n)
    -- ...
  -- end of Sketch
Carsten: Twelf is "special purpose" - only "forall exists" properties.
Ulf:
- we want an experimental system
- we also want a "safe" core system
- we need a translator
- we use a flag to restrict to the "safe" subset of the full language (which could be translated)
Makoto:
- we (CVS) don't have "fundamentalist concerns"
Peter: related topic:
- there is a corresponding situation for termination checking and strict positivity checking.
- until we have a clear definition of these checks we don't have a logic, only a framework
Ulf: correct (but who will do the job?)
Makoto/Conor: Epigram 2 (core) has a fixed (closed) universe - you only add explicit definitions, no implicit definitions (eliminators etc.)
Carsten: who are the users, and what are their wishes?
Nils Anders: we have discussed two possible directions before - proofs or programs. And we concluded that the competition in the proof part is too harsh so we should focus on "dependent types for programming".