I have been sitting on a draft of this for way too long; it’s about time I got it published. This post explains the exponential blowup that is caused by fully unbundling typeclasses. It also mentions some other pitfalls that we ran into when implementing the algebraic hierarchy for Iris.
@RalfJ this seems like a problem one can just throw machine resources and tooling at to keep going for quite a while. For example, we got nearly 30x proving speedup over many commits in a large project by combining selection with fine-grained parallelization, and with Dune and other tricks things could go even faster. So is there really a “hard” limit to the unbundled approach?
With complexities of n^5 and worse, I doubt KIWI (kill it with iron) is going to be sustainable. We are not just talking about consuming large amounts of CI time; this kind of system quickly gets annoying to use interactively (so I don’t think selection/parallelization can help).
Thanks for writing this up. The issue is already observed in our paper (p27).
In more complex parts of our development we are now experiencing increasingly serious efficiency problems, despite having already made sacrifices by artificially inhibiting many natural class instances in order not to further strain instance resolution. Fortunately, there is plenty of potential for improvement of the current instance resolution implementation. One source is the vast literature on efficient implementation of Prolog-style resolution.
I understand Lean is much faster than Coq. Moreover, it has some good combinators. @CohenCyril and @Vincent have also worked on this topic.
A bit of discussion is here.
Thanks for writing this up. The issue is already observed in our paper (p27).
Thanks, I’ll add a remark along those lines. I didn’t mean to imply that we were the first to discover this issue or anything like that.
The second is instance resolution efficiency. In more complex parts of our development we are now experiencing increasingly serious efficiency problems
I read this as referring to the time it takes to find an instance (which also can be a problem), as opposed to the size of the resulting instance. The latter is in some sense worse because the larger term increases RAM usage and Qed time, and also slows down any other pass during interactive proofs that (for whatever reason) traverses parts of the already computed term again.
That’s something I’d like to understand at some point. AFAIK they used to use a more bundled approach, which does not suffer from an exponential blowup, but moved to indices recently. I was once told that Lean exploits sharing in the terms better than Coq does. But also Lean has not been used for developments the size of what ssreflect was built for, so they may just not have hit the “bad part” of the high-degree polynomials in their complexity yet (if they have them).
They have a type class implementation, but provide some mechanisms for extending and renaming structures. My understanding is that it is lightweight, and that the repackaging is done behind the scenes. It looks helpful and natural.
The super unbundled version Class Monoid (A: Type) `{Semigroup A, Unit A} := ... looks so weird to me that I didn’t even consider it until reading the post.
Class Monoid (A: Type) `{Op A, Unit A} := ...
This limits the “index depth” to 2, and thus limits term complexity to O(n^3). Still not great, but at least it doesn’t grow further as we add more sublcasses. However, this scheme does not support diamonds in the hierarchy as instances will be duplicated.
(typo sublcasses btw)
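For concreteness, here is a minimal sketch of the two schemes quoted above (the class and field names beyond Op/Unit/Semigroup/Monoid are mine, not taken from the post):

Class Op (A : Type) := op : A -> A -> A.
Class Unit (A : Type) := unit : A.

(* Fully unbundled: Semigroup is indexed by the Op instance, and Monoid is
   indexed by the Semigroup instance, so the indices nest one level deeper. *)
Class Semigroup (A : Type) `{Op A} : Prop :=
  { sg_assoc : forall x y z, op (op x y) z = op x (op y z) }.
Class MonoidDeep (A : Type) `{Semigroup A, Unit A} : Prop :=
  { md_left_id : forall x, op unit x = x }.

(* Depth-2 variant from the quote: Monoid is indexed only by the operational
   classes and restates the associativity law itself. *)
Class Monoid (A : Type) `{Op A, Unit A} : Prop :=
  { m_assoc   : forall x y z, op (op x y) z = op x (op y z);
    m_left_id : forall x, op unit x = x }.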
I’m not sure if I’m interpreting this correctly: is this the issue where, e.g., having an instance “foo is a field” and an instance “foo is an ordered ring” gives 2 different ways of getting “foo is a ring”? IIRC I encountered that when doing mathclasses-for-hott, but I’m not sure how much of a problem it is in non-HoTT settings.
No, that’s an entirely different issue. What you are describing is exponential search time because there are multiple ways to derive the same thing. I mentioned that in the conclusion as a problem with a bundled typeclass-based approach. What I am mostly talking about, however, is exponential term size, completely independent of how TC search comes up with that term. We couldn’t do better even by fully writing everything out manually.
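To make the distinction concrete, here is a toy sketch (class names are hypothetical) of the term-size effect: with two independent indices per level, the fully elaborated type of each higher-level instance repeats everything one level down once per index, so term size grows roughly like 2^depth even though instance search finds each instance immediately and there is no diamond at all.

Class C0a (T : Type) := {}.
Class C0b (T : Type) := {}.
Class C1a (T : Type) `{C0a T, C0b T} := {}.
Class C1b (T : Type) `{C0a T, C0b T} := {}.
Class C2  (T : Type) `{C1a T, C1b T} := {}.

Instance nat_C0a : C0a nat := {}.
Instance nat_C0b : C0b nat := {}.
Instance nat_C1a : C1a nat := {}.
Instance nat_C1b : C1b nat := {}.
Instance nat_C2  : C2 nat := {}.

(* With Set Printing All, the elaborated statement of nat_C2 mentions the
   level-0 instances once for the C1a index and once for the C1b index;
   each further level of nesting doubles this again. *)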
A late reply, I am afraid. But Lean and mathlib structures are also discussed in a blog post by Tom Hales, who expresses concerns regarding both efficiency and the current design of the hierarchies:
I should note that this observation is not new, it already occurs in François Garillot’s PhD thesis … The goal of this post is to provide a more self-contained presentation, not requiring all the context set up in that thesis …
However, that paper does not talk at all about the exponential-blowup problem with non-packed approaches. Or have I just overlooked that discussion? I did Ctrl-F for “exponential”.
The size of the terms involved grows as C^n, where C is the number of separate components of a structure, and n is the structure nesting depth. For Cayley-Hamilton we would have C = 15 and n = 3, and thus terms large enough to make theorem proving impractical, given that algorithms in user-level tactics are more often than not nonlinear.
As the duplication of type classes/canonical structures still seems to be an issue, perhaps the work on the Hierarchy Builder could be a way to move more flexibly between the two options.
Jacques Carette and his student Musa Al-hassy have been thinking about generating algebraic structures too. They have a prototype in Elisp for Agda which allows one to move freely between bundled and unbundled representations.