Notes from the CoqPL 2024 Q/A session

CoqPL 2024 (see call for presentations) had a Q/A session with Coq developers (@ejgallego, @ybertot, @rtetley, @mattam82) and the audience in the room. Below are my notes of the questions and answers.

Q: on the rename, why move from “proof assistant” to “(interactive) theorem prover”?

A: this is not a very intentional choice on our part.

Q: I would like to use notations well, but I don’t know how to and the manual is not good enough. Could we have a library of documented/annotated examples of advanced usage?
To everyone: if you have a usage example of Coq notations that you find interesting, could you drop me (Benjamin Pierce) an email?
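
For illustration, here is a minimal sketch of what an annotated notation example could look like (the symbol, scope name, and levels are arbitrary choices, not something discussed in the session): the level and associativity mirror those of +, and a dedicated scope controls where the notation is available.

```coq
(* Hypothetical example: a notation documented with its level,
   associativity, and scope. *)
Declare Scope max_scope.
Delimit Scope max_scope with max.

Notation "x ⊕ y" := (Nat.max x y)
  (at level 50, left associativity) : max_scope.

Open Scope max_scope.
Check (1 ⊕ 2 ⊕ 3).   (* left associativity: parses as (1 ⊕ 2) ⊕ 3 *)
```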

Q: current recommendations for IDEs for Coq? Still CoqIDE or vscode?

A: my (Yves’) take on that is that CoqIDE should be phased out because it is too much of an effort to maintain. vscode or vscodium would be the way to go, I think. We have two experiments on the topic right now.

Q: really happy to see more movement towards sort polymorphism. We have bug reports in Iris due to incompatibilities from different libraries pinning universe levels in incompatible ways. There is another topic that I would like to bring up (Ralf Jung). Every week I run into an issue caused by unification: I want to use apply and it does not work, and I have to use rewrite instead. Unification is not on the roadmap. What is the status of that?

A: once we have sort polymorphism, it should be possible to just make the inductives in the standard library polymorphic.

A: for unification, we have two unification algorithms in Coq: one is used by tactics and another is used by typing, and they are incomparable. The idea of unifall is to move everyone to the one used by typing. It was blocked by the compatibility layer for primitive projections. Right now I think we could unblock this problem and go on with the unification of unification.

Q: when defining coercions, there is a “uniform inheritance” condition. I stumbled upon a bug report that suggests it has been removed, but I am still getting warnings about it.

A: my recollection is that it was made optional as it was too restrictive. You should ask on Zulip.
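
For context, a minimal sketch (hypothetical; the exact behaviour, a warning or an error, depends on the Coq version) of the kind of coercion the uniform inheritance condition is about: the source class is applied to concrete arguments instead of the coercion's own parameters.

```coq
(* Hypothetical example: the source class `prod` is instantiated at
   `nat * nat` rather than being fully parametric, which is the
   situation the uniform inheritance condition used to restrict.
   Depending on the Coq version, this declaration is rejected,
   accepted with a warning, or accepted silently. *)
Definition fst_nat (p : nat * nat) : nat := fst p.
Coercion fst_nat : prod >-> nat.
```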

Q: ecosystem question, in my own project I ended up switching from coq_makefile to Dune. I don’t know if I regret it. I wonder what’s the community opinion on those two approaches. Is there a plan / an opinion on the long term?

A (Emilio): I don’t think there is a position. Make and Dune are really different in the approaches they use. It’s clear to me that Make does not cut it anymore, for large codebases for example. Dune should become the standard build system – but people who want to use Make can still do it. You can save several hours per week by using Dune. I thought we could migrate to Dune in one week (laughs), but it took much longer than expected. I think most users will find Dune better and easier.

A (Tej): I’m still using Makefiles everywhere. I don’t know how to set up a Coq Dune project. For Makefiles I copy-paste the stdpp Makefile and everything works beautifully. I think it is a matter of documentation.
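
For reference, a minimal sketch (project and theory names are made up) of what a Dune setup for a Coq project can look like; the exact version of the coq extension depends on the Dune release you use.

```
; dune-project, at the repository root
(lang dune 3.8)
(using coq 0.8)

; dune, in the directory containing the .v files
(coq.theory
 (name MyProject))
```

Running dune build at the project root then compiles the theory.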

Q: there was a change to Coq modules recently that required changing the Dune files quite a bit. Will the interface become stable in the future?

A (Emilio): the config language of Dune is versioned. Once we hit 1.0, it should keep working in the indefinite future.

Q: you mentioned Ltac2. I am looking forward to stopping using Ltac1. Something feels missing to me: my users are using Ltac1 in their proof scripts, and I think between Proof. and Qed. Ltac1 is fine. But when you try to port something to Ltac2 for automation, you are stuck.
For example “intro patterns”: dealing with a list of intro patterns in Ltac2 is a pain, and you keep having to go back to Ltac1 locally, which is annoying. I have seen “porting ssreflect tactics to Ltac2” on the roadmap; to me that is less of a priority. I would rather make it easy to migrate Ltac1 automation to Ltac2. There are too many APIs missing right now.

A: I think you should contact Gaëtan with your request about the language.
We want to get to a situation where we can forget about Ltac1 and have the proof scripts be nice.
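
For illustration, a minimal sketch (tactic and hypothesis names are made up) of the round-trip described in the question: an Ltac2 tactic that falls back to Ltac1 through the ltac1:(…) quotation because the corresponding intro-pattern API is missing on the Ltac2 side.

```coq
From Ltac2 Require Import Ltac2.

(* Hypothetical example: drop back into Ltac1 for an intro pattern. *)
Ltac2 destruct_or () := ltac1:(intros [H1 | H2]).

Goal forall P Q : Prop, P \/ Q -> Q \/ P.
Proof.
  intros P Q.
  destruct_or ().
  - right; exact H1.
  - left; exact H2.
Qed.
```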

Q: I just wanted to say “thank you”. I am using Coq mainly on Windows, and this has become significantly less painful over the last few years, especially with the Coq Platform. Last year there were still some secret steps, but now they are better documented and automated. Thanks!

A: the Platform is a bit of a worry in a way; until now it has been a one-man project, which is always a bit of a risk. Romain Tetley joined to help Michael Soegtrop. If more people want to join, it would be great to make it more of a collective effort. The Coq Platform should be collaborative, not centered on the Coq core developers. We are going to try to make that happen by documenting the inner bits as much as possible.

Q (Amin): “better support for domain-specific logic”, what is the plan there?

A (Matthieu): have the metalanguage be more flexible in how you can program the elaborator and the tactics. In general we are looking to fine-tune the application of the Iris tactics, for example. We are always interested in this kind of example.

Q2 (Amin): the Iris proof mode generates very large proof terms, which leads to slow Qed times. Is there anything you could do there to help?

A2: do it more reflectively, maybe?

A2’ (Tej): on Qed I have a trick, which is to replace Qed by Admitted during CI and only check the full proofs occasionally. I would love to see this better supported by Coq.

A3 (Emilio): there is this experimental compiler called FCC which can do that.

A (Ralf): we thought about this hard, but things that should be O(1) are O(n), and we don’t see how to avoid that. We thought about avoiding strings, but nothing we could think of could get rid of that fundamental complexity issue.

Yves: it means we have some work to do as researchers.

Yves: this is the kind of thing we are talking about; Iris is a frontrunner, and we want to make it easier for Iris and for everyone else.

Q (Gregory): how many people have a real problem with proof term size and type-checking at Qed time?

Q: there is a big question about bundling and unbundling, and how this affects math classes and all that. Is there work on this, e.g. more memoization?

A: we tried hash-consing before and it did not work out that well. There is work on introducing “constants with arities”, which lets you avoid repeating the parameters of a constant; we plan to look into that to avoid redundancy.
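
For illustration, a minimal sketch (names are made up) of the two styles the question refers to: in the unbundled style the carrier and the operation are separate arguments and get repeated at every use site, which is one source of large terms; in the bundled style they are packed into a single record.

```coq
(* Unbundled: the carrier and the operation are separate parameters. *)
Class HasOp (A : Type) := has_op : A -> A -> A.
Definition twice_u {A : Type} `{HasOp A} (x : A) : A := has_op x x.

(* Bundled: carrier and operation live in one record. *)
Record magma := { carrier : Type; op : carrier -> carrier -> carrier }.
Definition twice_b (M : magma) (x : carrier M) : carrier M := op M x x.
```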

Q (Ralf): part of my understanding of why type-checking is hard in Coq is that it is a global imperative step because of global universe levels. I understand that Lean does not have that issue. Can we do better?

A (Matthieu): the type-checker is fully functional, the universes are dealt with separately. I don’t think universes are the issue.

Q (François): the complexity of the cost of applying a lemma. If I have a lemma that says (P → P), is the cost of applying it constant, or linear in the size of P? (If I apply this term a lot with the same P, will the result type-check P many times?)

A: yeah, possibly.

Q: can we hear more about the integration of observational type theory in Coq?

A: using rewrite rules, there is a prototype implementation of this by Loïc Pujet and Nicolas Tabareau, a version of OTT working with impredicative Prop. We don’t know yet whether we want to integrate it in the kernel or stick with rewrite rules.
The best exposition is their paper.

Q (Gregory): at Bedrock Systems we experimented with fine-tuning term reduction, and we think these controls are really valuable. If you are interested in that, I think we should chat.

A (Matthieu): I know Clément Pit-Claudel is interested in that.

Q: what’s the recommendation on the standard library to use? stdlib?

Q (Bas): what’s the status of SMT integration in Coq?

A (Yves): part of the work on Trakt is to try to do all the translations you would want in order to use SMT solvers. This specific piece of work is not taken on by the core developers. People we know, like Chantal Keller, are working on that. We don’t really have a roadmap.

Q (Bas): are there known projects actively using SMT?

A (Yves): not that I know of.


It’s certainly possible to provide examples. There are advanced examples e.g. in Iris, MathComp, and even Software Foundations, etc. The test-suite files of the Coq repository with “Notation” in their name also provide advanced examples.

This being said, the strength of the notation system is limited by the underlying LL parsing engine, which requires rule factorization at a global scale, and this is something relatively difficult to explain and document, if that is part of the question.

I’m willing to participate in a group interested in providing more examples and/or improving the documentation, if such a group is set up.

There are people interested in maintaining CoqIDE, so there is no reason to “sacrifice” it a priori, but we do foresee that vscode/vscodium scales better than CoqIDE for the needs of Coq: by relying on vscode/vscodium, we benefit from a large and active community that keeps the generic parts of the UI working and scalable on a daily basis, and that provides various useful and creative features and extensions (e.g. LiveShare, …), together with standardized communication protocols. The graphical quality and aesthetics of vscode/vscodium are also quite fashionable these days. Maybe experts would like to add more arguments…

Type-checking time at Qed should not be slower than evaluating the tactics that build the proof. A typical cause of it being slower (even exponentially slower) is that the type-checker does not preserve the history of reduction steps used in the proof (unfold, simpl, …) and tries to recheck from scratch why two terms are convertible (see e.g. this typical example of a long Qed: "Qed." takes a long time). In principle, it is certainly possible to do something about it, but I don’t know precisely how far some developers are already looking at this issue in their free time.
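
For illustration, a minimal sketch (numbers chosen arbitrarily) of the pattern described above: the tactic layer reduces the goal step by step, but the resulting proof term is just eq_refl, so the kernel has to rediscover the same conversion on its own at Qed time.

```coq
Fixpoint big (n : nat) : nat :=
  match n with 0 => 0 | S k => S (S (big k)) end.

Lemma big_spec : big 500 = 1000.
Proof.
  (* simpl computes big 500 during the proof, but no trace of this
     reduction is kept in the proof term. *)
  simpl.
  reflexivity.
Time Qed.   (* the kernel redoes the conversion from scratch *)
```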

Re. notation examples: my understanding is that @bcpierce would indeed be interested in pointers to specific examples (maybe less in the ones in Software Foundations), possibly with some explanation of what they do and how they work. Would you (@herbelin) be willing to create a dedicated forum thread on this question, to get the ball rolling?

As Gabriel says, I am very interested in gathering examples (and guidance) on advanced uses and best practices with the notation system. Even after struggling with it pretty seriously (and even with generous help from you, @herbelin!) I still do not feel I really know how to make best use of what’s provided.

My suggestion at the CoqPL meeting was to try to collect a little compendium of streamlined examples with a bit of inline explanation, extracted from various existing developments. A dedicated forum thread would be an excellent place to discuss how best to do this.

As said, MathComp (e.g. the bigops library) and Iris use advanced notations quite a lot. In my experience developing custom notations for Software Foundations, the problem was not so much about notations themselves (besides known wishes like better support for numerals, strings, identifiers, etc. in custom notations) as about how to make the resulting grammar fit the LL constraints of the underlying grammar engine. So my understanding, based on the needs of SF, is that you might be looking more for examples of grammars (in the sense of a collection of notations that fit well together) than of notations per se.
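
For illustration, a minimal sketch (closely following the custom-entries example from the reference manual, with made-up constructor names) of a small grammar of notations whose levels have to be chosen together so that the whole thing fits the LL parser:

```coq
Inductive Expr :=
| One : Expr
| Add : Expr -> Expr -> Expr.

Declare Custom Entry expr.
Notation "[ e ]" := e (e custom expr at level 2).
Notation "1" := One (in custom expr at level 0).
Notation "x + y" := (Add x y) (in custom expr at level 2, left associativity).
Notation "( x )" := x (in custom expr, x at level 2).

Check [ 1 + 1 + (1 + 1) ].
```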

I can do that, e.g. on Zulip. What kind of advertising do we do?

Yes, I do mean whole grammars!

A dedicated Zulip thread sounds good. We can advertise it on the main forums and mailing lists when it’s set up…

So, I opened a stream “Coq Notation system” on Coq’s Zulip. To ensure a minimum of traffic, the idea is also that questions about notations that are posted on the “Coq user” stream will be moved to this stream.

If necessary, wiki pages could be made out of the discussions happening in this stream.

Time to try it…