Why dependent type theory?

I’m in the process of writing my thesis on proof assistant performance bottlenecks (with a focus on Coq), and there’s a large class of performance bottlenecks that come from (mis)using the power of dependent types. So in writing the introduction, I want to provide some justification for the design decision of using dependent types, rather than, say, set theory or classical logic (as in, e.g., Isabelle/HOL). And the only reasons I can come up with are “it’s fun” and “lots of people do it”.

So I’m asking these mailing lists: why do we base proof assistants on dependent type theory? What are the trade-offs involved?

I’m interested both in explanations and arguments given on list, as well as in references to papers that discuss these sorts of choices.

Thanks,

Jason


You might be interested in Lamport and Paulson’s “Should Your Specification Language Be Typed?” (https://dl.acm.org/doi/10.1145/319301.319317). Lamport’s publications page includes some commentary on how the paper came to be.

-Colin

Within type theory, certain mathematical ideas can be expressed only with dependent types.

In particular, if the type of the result returned by a function depends on the argument, dependent types are required.

The example I chose is a function that returns the first unit vector of a given dimension: https://www.owlofminerva.net/files/formulae.pdf#page=12
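
To make this concrete, here is a minimal Coq sketch of such a function (the names zeros and e1 are mine, not taken from the linked PDF, which uses a different system):

    Require Import Coq.Vectors.Vector.

    (* The all-zero vector of length n. *)
    Fixpoint zeros (n : nat) : Vector.t nat n :=
      match n with
      | 0 => Vector.nil nat
      | S m => Vector.cons nat 0 m (zeros m)
      end.

    (* The first unit vector (1, 0, ..., 0) of dimension n. The result
       type Vector.t nat n depends on the value of the argument n,
       which is exactly what a simple type system cannot express. *)
    Definition e1 (n : nat) : Vector.t nat n :=
      match n with
      | 0 => Vector.nil nat
      | S m => Vector.cons nat 1 m (zeros m)
      end.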

Let me quote from an earlier email available at https://groups.google.com/d/msg/metamath/Fgn0qZEzCko/bvVem1BZCQAJ :

But it is also clear that HOL’s type system (and therefore Isabelle/HOL’s type system) is poor.
Freek Wiedijk on the (current) HOL family: “There is one important reason why the HOLs are not yet attractive enough to be taken to be the QED system:
• The HOL type system is too poor. As we already argued in the previous section, it is too weak to properly do abstract algebra.
But it is worse than that. In the HOL type system there are no dependent types, nor is there any form of subtyping. (Mizar and Coq both have dependent types and some form of subtyping. In Mizar the subtyping is built into the type system. In Coq a similar effect is accomplished through the use of automatic coercions.)
For formal mathematics it is essential to both have dependent types and some form of subtyping.” [Wiedijk, 2007, p. 130, emphasis as in the original]
https://owlofminerva.net/files/fom_2018.pdf#page=10

For example, in HOL or Isabelle/HOL, group theory cannot be expressed naturally (since type abstraction is missing), to say nothing of functions involving dependent types, such as the one returning the first unit vector of a given dimension mentioned above.
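
For contrast, here is a minimal Coq sketch (my own illustration, not from the quoted sources) of how type abstraction lets a group be packaged together with its carrier; the later fields refer to the earlier ones, which requires dependent records:

    (* A group bundled with its carrier type. *)
    Record Group : Type := {
      carrier : Type;
      op : carrier -> carrier -> carrier;
      e : carrier;
      inv : carrier -> carrier;
      assoc : forall x y z, op x (op y z) = op (op x y) z;
      e_left : forall x, op e x = x;
      inv_left : forall x, op (inv x) x = e
    }.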

Note that there are also formulations of dependent type theory with classical foundations, see: https://owlofminerva.net/files/fom_2018.pdf#page=1

I would consider type theory superior to set theory as type theory is a systematic approach, whereas the axioms of set theory are historically contingent.

Kind regards,

Ken Kubota

Which bottlenecks are you referring to? Are they intrinsically tied to dependent types, or are they related to the treatment of propositions and equality in systems such as Coq?

The main bottleneck that I’m referring to here (though not the only one in my thesis) is due to the fact that arbitrary conversion problems can happen during typechecking. This is used to great advantage in proof by reflection (where all of the work is done in checking that a proof of “true = true” has the type “some-check = true”). But it also causes performance issues if you’re not careful.

To take a toy example, consider two different definitions of factorial: n! = n * (n - 1)!, and n! = (n - 1)! * n. Suppose you have two different ways of computing a vector (length-indexed list) of permutations, one which defines the length in terms of the first factorial, and the other which defines the length in terms of the second factorial. Suppose you now try to prove that the two methods give the same result on any concrete list of length 10. Just to check that this goal is valid, Coq is now trying to compute 10! as a natural number. This example is a bit contrived, but less egregious versions of this issue abound, and when doing verified engineering at scale, these issues can add up in hard-to-predict ways. I have a real-world example where changing the input size just a little bit caused reflexivity to take over 400 hours, rather than just a couple of minutes.
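
Here is a hedged Coq reconstruction of that toy example (perms1 and perms2 are axioms standing in for the two real enumeration functions):

    Require Import Coq.Vectors.Vector.

    (* Two definitionally different factorials. *)
    Fixpoint fact1 (n : nat) : nat :=
      match n with 0 => 1 | S m => S m * fact1 m end.
    Fixpoint fact2 (n : nat) : nat :=
      match n with 0 => 1 | S m => fact2 m * S m end.

    (* Stand-ins for the two permutation enumerators, each indexed by
       a different factorial. *)
    Axiom perms1 : forall (A : Type) (n : nat),
      Vector.t A n -> Vector.t (Vector.t A n) (fact1 n).
    Axiom perms2 : forall (A : Type) (n : nat),
      Vector.t A n -> Vector.t (Vector.t A n) (fact2 n).

    (* Merely type-checking this statement forces Coq to convert
       fact1 10 with fact2 10, i.e. to compute 10! on unary naturals.
       Expect it to be slow. *)
    Goal forall v : Vector.t nat 10, perms1 nat 10 v = perms2 nat 10 v.
    Abort.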

Hi Jason,

There’s many reasons for me to choose a dependently typed language:

  • From a philosophical point of view, dependently typed languages are the only kind of language that has been given a meaning explanation (in the sense of Martin-Löf). This is important because it means I can grasp the meaning of each rule/axiom on a fundamental level, something that I never managed for set theory.

  • From a practical point of view, in a dependently typed language I don’t have to learn and use separate languages for writing programs and for writing proofs about them. In fact, by giving the right types to my programs I often do not need to give any proofs at all (i.e. correct-by-construction programming; see the sketch after this list).

  • Dependent type theory is constructive so I do not have to assume classical principles when I do not need them.

  • Since expressions at the type level can compute in dependent type theory, there are many equalities that I do not need to reason about explicitly but just hold by definition/computation (although not as many as I would want).
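
As a sketch of the correct-by-construction point above (my own example, in Coq rather than Agda): a head function on length-indexed vectors needs no side condition and no proof argument, because its type already rules out the empty vector.

    Require Import Coq.Vectors.Vector.

    (* head accepts only vectors of nonzero length S n. The return
       clause of the match lets the impossible nil branch return a
       trivial value of type unit, so no proof obligation remains. *)
    Definition head {A : Type} {n : nat} (v : Vector.t A (S n)) : A :=
      match v in Vector.t _ m
            return match m with 0 => unit | S _ => A end
      with
      | Vector.nil _ => tt
      | Vector.cons _ x _ _ => x
      end.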

As for other interactive theorem provers not based on dependent type theory, such as Isabelle or HOL4, I am mainly jealous of their excellent integration with automated theorem proving tools (e.g. sledgehammer) and their huge mathematical libraries. In a dependently typed language, it is much harder to get everyone to agree on a definition because there are so many different ways to encode a concept. This has the effect that libraries are a lot more fragmented and everything is done in a more ad-hoc manner.

Another weak point of current implementations of dependent type theory is abstraction: you can make a definition abstract but that means it becomes an inert lump that stubbornly refuses to compute, so you lose most of the benefits of working with a dependently typed language. But I see these as reasons to continue to improve dependently typed languages rather than to abandon them.
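
In Coq, for instance, the “inert lump” effect is visible with Qed-opaque definitions (a small sketch):

    Definition double (n : nat) : nat := 2 * n.

    (* An equivalent definition sealed by Qed; conversion cannot
       unfold it. *)
    Lemma double' (n : nat) : nat.
    Proof. exact (2 * n). Qed.

    Goal double 3 = 6.
    Proof. reflexivity. Qed.        (* succeeds: double computes *)

    Goal double' 3 = 6.
    Proof. Fail reflexivity. Abort. (* fails: double' is opaque *)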

I’m probably an outlier on this topic. The point of my project was to use dependent types as a way to guide a less experienced user (perhaps an experienced software engineer with little proof background) to a proof, although I only had myself as a model for the “less experienced user”. The goal was to enable such a user to generate proven-correct code from program specifications, where the specifications included comprehensive invariants encoded as dependent types.

What dependent-type-encoded invariants do for a proof is that the constraints they impose make much clearer what can and can’t be accomplished in each proof step, freeing the user from having to conceptualize the overall proof and limiting the amount of proof exploration (trial and error) needed. They also enable a back translation from a proof dead end to some understanding of why the developing algorithm won’t work (i.e.: such-and-such invariant is now impossible to maintain due to these particular proof steps). In this way, a proof assistant becomes a programming assistant.

The generated code had all of the dependent type aspects erased, as
they were all in Prop.

I don’t know how this could be accomplished without dependent types, especially given a user who may have a very good understanding of what particular dependent types encode in terms of invariants, but has little or no experience with proof techniques and probably no theory background at all.

I don’t think I ever hit any performance bottlenecks in this development that weren’t related to the various (e)auto proof search automations doing a less than impressive job of pruning the search. I have an example that I proved both with and without (e)auto proof search: without it, coqtop takes less than 7 minutes to complete on my rather dated hardware; with (e)auto, it takes about 2 hours.

In my particular case, I have been interested in type theory because it allows definitions of primitive recursive functions that are guaranteed to terminate.
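
For instance (a trivial Coq sketch), every Fixpoint must be structurally recursive, so it is accepted only if it terminates:

    (* Structural recursion on the first argument; Coq's termination
       checker accepts it, so add is total by construction. *)
    Fixpoint add (m n : nat) : nat :=
      match m with
      | 0 => n
      | S m' => S (add m' n)
      end.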

Regards.

For example, mathematicians write:
"In any field F there is an inversion operation for nonzero elements:
inv : F \ {0} -> F".
With dependent types, the type of inv is expressed as

   (x : F) -> x /= 0# -> F

This type depends on the value x, and this value can change during a computation. Without dependent types, I do not know how to express this mathematical notion as a type.
There are many such examples.
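
In Coq the same type can be written as follows (a minimal sketch; F and zero are assumed parameters):

    Section FieldInversion.
      Variable F : Type.
      Variable zero : F.

      (* The type of inv: its second argument is a proof about the
         first one, so the type depends on the value x. *)
      Definition inv_type : Type := forall x : F, x <> zero -> F.
    End FieldInversion.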

Also, dependent types are good for constructive proofs. For example, consider the theorem “For any prime number p there exists a prime q greater than p”. With dependent types, we prove a stronger statement: “The algorithm A provided below takes any prime number p and returns a prime q greater than p”. If one cannot invent a constructive proof but can prove the thing in classical logic, then one can add the law of excluded middle as a postulate and write a classical proof in Agda.
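
In Coq syntax, the difference between the two statements is visible in the types themselves (a sketch; the Agda version is analogous, and prime is a stand-in parameter, not a library definition):

    Parameter prime : nat -> Prop.

    (* The bare existence statement. *)
    Definition exists_larger_prime : Prop :=
      forall p, prime p -> exists q, prime q /\ p < q.

    (* The constructive strengthening: an inhabitant of this type is
       an algorithm computing the witness q together with its proofs. *)
    Definition find_larger_prime : Type :=
      forall p, prime p -> { q : nat | prime q /\ p < q }.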

Regards,

I really enjoyed the following video with Thorsten Altenkirch which speaks informally about Type Theory and Set Theory:

https://youtu.be/qT8NyyRgLDQ

The follow-up video has some further commentary https://youtu.be/SknxggwRPzU.

Since you’re writing a thesis with a focus on Coq, I’m sure you understand the details. You might enjoy the informal distinctions between Type Theory and Constructive Mathematics vs Set Theory and Classical Logic.

I would summarise that Constructive Mathematics (using Type Theory) is better because it provides more confidence in your proofs.

Best,

Steve.


There is a problem with the cost of type checking in Agda, and probably in Coq too. I do not know whether it is fundamental or merely technical, and I have not seen an answer to this question so far. In practice, it looks like this: Agda can type-check only a small part of the computer algebra library of methods (with full proofs). As the implementation grows, with increasing hierarchy level of parameterized modules, the type-checking cost seems to grow exponentially in the level. So, after implementing in Agda an average textbook on computer algebra (where a constructive proof is known for each statement), say, of 500 pages, it will not be type-checked in 100 years.

Probably this is a difficult technical problem that will be practically solved within several years.

Regards,

An additional advantage of Type Theory is that it maintains the abstraction barrier, whereas in Set Theory you can look inside to see how your mathematical objects are encoded as sets (for example, with the usual von Neumann encoding of the naturals, 1 ∈ 2 is a theorem).

Dependent types are good for pure mathematics (classical or constructive). They are the natural home to define group, ring, metric space, topological space, poset, lattice, category, etc., and study them. Mathematicians who use(d) dependent types include Voevodsky (in Coq) and Kevin Buzzard (in Lean), among others. Kevin and his team defined, in particular, perfectoid spaces in dependent type theory.

Martin

The paper http://home.in.tum.de/~immler/documents/immler2018manifolds.pdf has some interesting discussion of the difficulty the authors had with the lack of dependent types when setting up the basics of differential geometry in Isabelle/HOL:

Following this approach, we avoid the need to define the type R^n
parametrized by a natural number n, which is impossible in Isabelle/HOL’s
simple type theory.
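
For contrast, in Coq that type is a one-liner (a minimal sketch, not taken from the paper): a family of types indexed by a natural number n, which is exactly the dependency that simple type theory cannot express.

    Require Import Coq.Reals.Reals Coq.Vectors.Vector.

    (* R^n as a type depending on the value n. *)
    Definition Rn (n : nat) : Type := Vector.t R n.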

Kevin Buzzard makes an interesting challenge to non-dependently typed theorem provers:

Does this mean that there are entire areas of mathematics which are off
limits to his system? I conjecture yes. Prove me wrong. Can you define the
Picard group of a scheme in Isabelle/HOL? I am not sure that it is even
possible to write this code in Isabelle/HOL in such a way that it will run in
finite time, because you have to take a tensor product of sheaves of modules
to define the multiplication, and a sheaf is a dependent type, and your clever
HOL workarounds will not let you use typeclasses to define module structures
on the modules which are values of the sheaves.

(We don’t have the Picard group of a scheme in any theorem prover today, as far as I’m aware.)

I won’t say that Lean and Coq “come naturally” to mathematicians (having to learn intricacies such as typeclass resolution, and having 4 or 5 different types of brackets, are not selling points), but even though mathematicians don’t officially know anything about type theory, in practice they already understand dependent types very well, and would be horrified to have to do without them.

best regards,
Scott Morrison

First of all I don’t like the word “dependent type theory”. Dependent types are one important feature of modern Type Theory but hardly the only one.

To me the most important feature of Type Theory is its support for abstraction in mathematics and computer science. Using types instead of sets means that you can hide implementation choices, which is essential if you want to build towers of abstraction. Set theory fails here badly. Just as a very simple example: in set theory you have the notion of union, so for example

{0,1} \cup {0,1,2,3} = {0,1,2,3}

However, if we change the representation of the first set and use, let’s say, {true, false}, we get a different result:

{true , false} \cup {0,1,2,3} = {true,false,0,1,2,3}

This means that \cup exposes implementation details, because the results are not equivalent up to renaming. In Type Theory we have the notion of sum, sometimes called disjoint union, which is well behaved:

{0,1} + {0,1,2,3} = {in1 0,in1 1,in2 0,in2 1,in2 2,in2 3}

{true , false} + {0,1,2,3} = {in1 true,in1 false ,in2 0,in2 1,in2 2,in2 3}

Unlike \cup, + doesn’t reveal any implementation details; it is a purely structural operation. Having only structural operations means that everything you do is stable under equivalence, that is, you can replace one object with another one that behaves the same. This is the essence of Voevodsky’s univalence principle.
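
In Coq, the well-behaved sum is the ordinary sum type; a minimal sketch, with the constructors named in1/in2 to match the notation above:

    (* A disjoint union never inspects its summands: elements are only
       reachable through the tags in1/in2, so no encoding leaks. *)
    Inductive sum' (A B : Type) : Type :=
      | in1 : A -> sum' A B
      | in2 : B -> sum' A B.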

There are other nice aspects of Type Theory. From a constructive point of view (which should come naturally to computer scientists), the propositions-as-types explanation provides a very natural way to obtain “logic for free”, and it is pedagogically helpful since it reduces logical reasoning to programming.

There are performance issues with implementations of Type Theory; however, in my experience (mainly Agda), the execution of functions at compile time isn’t one of them. The main problem is dealing with a loss of sharing when handling equational constraints, which can blow up the time needed for type checking. I think this is an engineering problem, and there are some suggestions for how to fix it.

Thorsten


To me the most important feature of Type Theory is its support for abstraction in mathematics and computer science. Using types instead of sets means that you can hide implementation choices, which is essential if you want to build towers of abstraction. Set theory fails here badly.

This is true, but I am also afraid that “dependent type theory” is a
regression on that point, compared to, say, ML.

Conversion is a fanciful abstraction-breaker that is a well-known source of non-modularity in dependently-typed languages. Worse, there is not even a proper way to abstract over it, except for monstrosities such as Extensional Type Theory, where you throw the baby out with the bathwater.

The weak type system of ML is a strength in this setting. As long as you
don’t touch the interface, you can mess with the implementation and it
will still compile. (Running properly is another story.)

PMP


The main bottleneck that I’m referring to here (though not the only one in my thesis) is due to the fact that arbitrary conversion problems can happen during typechecking. This is used to great advantage in proof by reflection (where all of the work is done in checking that a proof of “true = true” has the type “some-check = true”). But it also causes performance issues if you’re not careful.

Without going fully reflexive, I have been told several times that the computational nature of conversion actually improves performance compared to systems that do not feature it. In particular, the meme-y claim that “proving 2 + 2 = 4 requires on the order of 10⁷ symbols” has some hint of truth in conversionless theories. Computation allows for much more compact proofs, which are therefore more efficient to typecheck. Memory is not free.

(Note that I have no empirical proof of that, I am merely conveying
hearsay.)
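
For illustration, the conversion-based side of that comparison in Coq is a one-line proof; the checker does the arithmetic itself during conversion:

    (* The proof term is just eq_refl; conversion computes 2 + 2. *)
    Lemma two_plus_two : 2 + 2 = 4.
    Proof. reflexivity. Qed.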

Obviously, you can write (almost) arbitrary computations in conversion,
but then, I find it somewhat strange to complain about it. You can write
superexponential algorithms in C++, but that’s not a valid reason to
rant against the language. I really wonder why people blame
dependently-typed languages instead of their own inefficient programs.

PMP

Good point. However, this is on a different level. While you can replace one object with another that behaves the same, this doesn’t mean that you can replace one piece of program text with another that produces an object that behaves the same. Clearly, replacing program text must be an intensional operation.

When you design your software or proof, you need to take care that you support the right kind of parametrisation. Having types is essential, but it doesn’t mean that good design comes for free. As far as the dependency of code on arbitrary implementation choices is concerned, I agree that we need to be more flexible about the distinction between strict and definitional equality.

Thorsten

Hi,

following up on Thorsten’s comment about the word “dependent type theory”, I would like to add a few observations from this discussion:

  • I think the word “type theory” itself is unclear in this context. At least several of the emails seem to use different but related ideas of what it means:

    • It could mean “something where everything has a type” (i.e., not the usual ZF). Then HOL would be type theory. (Thorsten’s email quoted below makes sense in that setting, because HOL avoids the problem described there.)

    • It could mean the above with dependent types (but not necessarily the Curry-Howard thing from the next item).

    • It could mean “a system where we use the same language for propositions and proofs via the Curry-Howard isomorphism” (I believe this will then also need dependent types, since otherwise the proof terms are not powerful enough).

    • It could mean a system with strong normalization (so that everything terminates); at least some of the answers seem to see this as part of the meaning of “type theory”.

    Of course, there are many interactions between the different concepts, but if we talk about the costs or benefits of adopting type theory, it becomes quite important which aspect of it we are adopting. (E.g., if we have, say, a good reason why we need the second point, and a big cost for getting the fourth point, then it is important not to just say “type theory has the following good reason and the following cost”.)

    Maybe when discussing why type theory, we should prefix our answers with what we mean by type theory (e.g., one of the above). Otherwise it will be very confusing (at least to me).

Another question is the context in which we want to use it:

  • Programming (with all the associated things like verification, type checking, etc.)

  • Math

These have different requirements, so making explicit which domain we are thinking of in our answer might make things clearer as well.

Just my personal thoughts, but I hope they may help to add some clarity to the discussion.

Best wishes,
Dominique.

First of all, there is a difference between “type theory” and “Type Theory”.

The former refers to the theory of types as used in programming, while the latter refers to type theory in the tradition started by Per Martin-Löf, incorporating, among other concepts, dependent types, universes, inductive types, etc. Now, it is correct to say that using capital letters cannot hide the fact that there is more than one instance of Type Theory, because it is an evolving concept. However, the same applies to set theory: there are many variants of it, and in a discussion you may have to state which one you refer to.

However, as with set theory, there are some remarks you can make about “Type Theory” in general, and then it is not necessary to make an exact reference to a particular formal system. In my email I made some general remarks but then referred to univalence, which belongs to Homotopy Type Theory; again, there is more than one formal system, but at least it narrows down the scope.

Thorsten

I’m confused by this. Are you saying that in Agda typechecking is
exponential in the number of files? Or exponential in the number of
nested abstractions? Or something else? Do you have a toy example
demonstrating this behavior?

I have no toy example so far, but I think one can be provided.

I have a real-world example: the DoCon-A library for algebra:

Index of /pub/local/Mechveliani/docon-A/2.02

This is a small part of an intended general-purpose library for algebra (as algebra methods go it is very small, but compared to Agda practice it is large).

Its install.txt says: “for the -M15G key (15 Gb heap) installation takes about 50 minutes on a 3 GHz personal computer.”

It looks like the type checker has exponential cost in the depth of the tree of the introduced parametric module instances.

There is a hierarchy of “classes” (classical abstract structures): Magma, (Commutative)Semigroup, (Commutative)Monoid, (Commutative)Group, …, (Commutative)Ring, Field, IntegralDomain, EuclideanDomain, GCDDomain, LeftModuleOverARing, … This tree depth may grow up to maybe about 25.

And there are domain constructors: integer, vector, fraction, polynomial, residueRing, … These constructors are provided with instances of some of the above abstract structures, and these instances include implementations of the operations they need, with proofs.

The type checker deals with a hierarchy of such instances, and it performs evaluation (normalization, …) on very large terms representing these instances. For example, the Integer domain may have 20 instances in it, and this large term is substituted many times into other terms, because almost every domain uses some features of the Integer domain. In any case, very large terms appear internally, repeated many times, and the terms containing them need to be normalized. Further, the domain

Vector (EuclideanRingResidue f (Polynomial (Fraction Integer)))    (D)

is supplied with five instances of Magma, five instances of Semigroup, five instances of CommutativeSemigroup, five instances of Monoid, five instances of CommutativeMonoid, and also many other instances. The class operations in these instances (and their proofs) are each implemented in their own individual way. The number of different instances of the classical operations grows exponentially in the constructor depth for domains like (D).

I do not expect that mathematical textbooks use domain constructs like (D) of depth greater than 10, but Agda already has practical difficulties at a depth of about 4. This is because each construct like (D) is further substituted into different parametric modules: method M1 uses one item from (D), so it is implemented in the environment of a parametric program module to which (D) is substituted for a parameter; method M2 uses another item from (D); and so on. And large subterms get copied internally.

In a mathematical textbook, all these substitutions are mentioned or presumed, and are understood by the reader, so the informal “rigorous” proofs fit in, say, 200 pages (~100 Kb of memory). But when a type checker tries to understand these constructions, it creates many copies of large subterms and pays the cost of normalizing them. And the formal proofs take 15 Gb of memory to check.

First, this eagerness to copy can probably be reduced (though this is probably not easy to implement). Second, something may need to be arranged in the style of the application programs; there is a paper about this style in Coq, and I recall A. Mahboubi is among the authors.

So, there is a fundamental restriction – which hopefully can be handled by adopting a certain programming style (I never looked into this). And there is probably also something to fix in the Agda type checker.

Regards,