I would like to ask for people’s opinions on, and experience with, using axioms in Coq developments.
Pragmatically, excluded middle, functional extensionality, and proof irrelevance make so many things easier. It seems so intuitive to think about programs in terms of classical logic and set theory, so the point of view that “axioms are okay to use” seems quite reasonable.
It seems that in practice, axioms are often avoidable, at the cost of some effort. For example, the law of excluded middle is unnecessary when properties are actually decidable, but you have to show how to decide them. And even when they’re not, theorems can be twisted into classically equivalent versions which are nevertheless provable intuitionistically. Some might argue the result is also more elegant this way. I wonder whether there are general principles to improve the prospects of axiom-freedom in a project.
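To make the “show how to decide them” point concrete, here is a minimal sketch: the standard library already provides a decision procedure for equality on `nat`, and from any such procedure the corresponding instance of excluded middle is provable with no axiom.

```coq
(* A minimal sketch: no axiom is needed for excluded middle on a
   property once we can decide it. *)
From Coq Require Import Arith.

(* Nat.eq_dec is a stdlib decision procedure for equality on nat. *)
Check Nat.eq_dec : forall n m : nat, {n = m} + {n <> m}.

(* Any decision procedure yields the corresponding instance of EM. *)
Lemma dec_em (P : Prop) : {P} + {~ P} -> P \/ ~ P.
Proof. intros [H | H]; [left | right]; assumption. Qed.
```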
So an axiom-free way may be possible, but what are its benefits? One issue is that some common axioms are inconsistent with each other, so not relying on them makes developments “portable” between “formal subsystems” of Coq. (What are good examples? Are there such problems with extensionality?)
What are other reasons? Maybe some of us are addicted to the glowing light of `Closed under the global context`?
The most famous example of incompatible axioms is (I think) axiom K and univalence. Axiom K makes it easier to pattern match (à la Agda), and univalence is getting more popular as it is supposed to bring formal proofs closer to usual mathematics. Equations https://github.com/mattam82/Coq-Equations is an example of a plugin that used to rely heavily on axiom K and has been changed to be configurable (and thus usable in a HoTT / UniMath context).
A famous example of an axiom-free library is the math-comp library https://github.com/math-comp/math-comp, where booleans are used to represent decidable properties instead. And yet, this library is mostly designed to do maths.
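The boolean-representation idea can be illustrated without math-comp itself: the standard library links the boolean test `Nat.eqb` to propositional equality through the `reflect` predicate, which is exactly the pattern math-comp builds on (a small sketch):

```coq
(* Linking a boolean test to the corresponding Prop via `reflect`,
   math-comp style, using only stdlib lemmas. *)
From Coq Require Import Arith Bool.

Check Nat.eqb_spec : forall n m : nat, reflect (n = m) (Nat.eqb n m).

(* Destructing the reflection lemma converts between the two views. *)
Lemma eqb_to_eq (n m : nat) : Nat.eqb n m = true -> n = m.
Proof.
  destruct (Nat.eqb_spec n m); intros H; [assumption | discriminate H].
Qed.
```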
On the other hand, there are examples of libraries that choose to adopt and assume classical axioms. The best example is Arthur Charguéraud’s TLC library https://gitlab.inria.fr/charguer/tlc, which uses quite strong classical axioms (including the axiom of indefinite description, something that can usually also be found in HOL theorem provers). Another (heavily relied on) example is the Reals library in Coq’s standard library. It was recently suggested to split the axioms of the Reals library into two layers: a constructive one and a classical one. I personally regret that Coq doesn’t have a sort of interface-files-plus-virtual-library mechanism, i.e. the file containing the Reals axioms could be “instantiated”, if the user wants to, by an actual implementation and proofs of these axioms.
The axiom of functional extensionality is, I think, one of the most easily admitted axioms. It is a consequence of many others, including the axiom of univalence.
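For reference, this is the dependent form of the statement, as axiomatized in the standard library module `Coq.Logic.FunctionalExtensionality` (restated here):

```coq
(* Functional extensionality, dependent form, as in the stdlib. *)
Axiom functional_extensionality_dep :
  forall (A : Type) (B : A -> Type) (f g : forall x : A, B x),
    (forall x, f x = g x) -> f = g.
```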
I share this confusion around people avoiding axioms. Thanks for the explanation @Zimmi48; I actually didn’t know those axioms were inconsistent.
In my own work I use functional extensionality and proof irrelevance without thinking too much about it, but for some reason I avoid excluded middle (for which I have no justification; I’ve just gotten used to thinking intuitionistically). I’m not doing theoretical work but building systems, and it seems silly to get lost in the details of axioms when the TCB includes our specifications, which are much less trustworthy than Coq or its foundations.
Even if you do buy into minimizing the TCB, you’re relying on the soundness of Coq, and it can become unsound because of software bugs as much as due to problems in the theory. What I don’t see people talking about is the set of features of Coq they use, even though experimental and less-well-tested features are less likely to be sound than simpler, older, well-tested ones. For example, universe polymorphism and vm_compute have had soundness issues in the past, and newer features like native compute, primitive projections, and probably a number of options you can set are all candidates for undiscovered soundness issues (I think there was actually a soundness bug in primitive projections at one point). The termination checker is also complicated, so you might worry about the set of types used for recursion.
There were indeed two critical bugs with primitive projections that were found quickly after the feature was introduced.
OTOH, the currently known critical bug https://github.com/coq/coq/issues/9294 has to do with template polymorphism, a feature that has been there for a long time but that was never well understood and that is likely to go away soon in favor of a restricted form of full universe polymorphism, which is apparently better understood. Other features that are less well understood and may still contain critical bugs, even though they have been there for a long time, include advanced uses of the guard condition and the module system (when doing weird things with functors).
I agree for “standalone proofs”. Proof libraries are trickier, because axioms don’t always compose: if you load two libraries with incompatible axioms, you have a problem. Say, if you are using TLC for some utilities but need -impredicative-set (which is incompatible with TLC’s axioms), you might need to rewrite significant parts of your code.
I seem to remember some implementation of locally nameless (from TLC?) relying on excluded middle to decide equality of variables. While of course that’s legitimate, that’s one case that is easy enough to avoid, and I prefer libraries that do.
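Avoiding excluded middle here is indeed cheap: for any concrete representation of variables, the `decide equality` tactic derives the decision procedure. A sketch, with a hypothetical two-constructor variable type of my own invention:

```coq
(* Sketch: `decide equality` derives decidable equality for a
   (hypothetical) variable type, with no classical axiom. *)
From Coq Require Import Arith.

Inductive var : Set :=
  | Bound (n : nat)
  | Free  (n : nat).

Definition var_eq_dec (x y : var) : {x = y} + {x <> y}.
Proof. decide equality; apply Nat.eq_dec. Defined.
```

Ending with `Defined` rather than `Qed` keeps the procedure transparent, so it actually computes when used in programs.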
EDIT: to be sure, the only use case I know for impredicative Set is Atkey’s denotational semantics for System F, and even that can be avoided well enough (for many but not all purposes) by using logical relations instead.
First, an interesting feature of the upcoming SProp sort is the ability to add axioms in SProp that will not endanger canonicity of the computational part of Coq (funext for example), while having built-in definitional proof-irrelevance (somehow, giving the same axiomatic freedom in Prop as OTT prefigured). The current setup of Prop + proof-irrelevance + funext is mostly fine though: it is compatible with extraction and provides a good base line for verification of programs / systems (when you don’t need univalence), and is backed up by the Set theoretic models we know of. UIP, which is implied by proof irrelevance, is really not necessary in practice to do dependent pattern-matching, so I’d rather advocate staying away from it when writing programs. It is a fine shortcut when reasoning about programs though, for which SProp will provide a better story in the future (when we’ll allow an equality in SProp that you can eliminate to Type, soundly). Consistent classical axioms could be added the same way in SProp, and it’s an interesting research question to see if each one really “adds” something to the theory or not (in terms of the class of definable computable functions for example).
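The built-in definitional proof irrelevance of SProp can be observed directly; a minimal sketch (SProp is available since Coq 8.10; older setups may need `Set Allow StrictProp`):

```coq
(* Any two proofs p, q of an SProp are definitionally equal, so the
   conversion check accepts h : P p at type P q with no rewriting. *)
Definition sprop_irrelevance
  (A : SProp) (p q : A) (P : A -> Prop) (h : P p) : P q := h.
```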
Second, the axioms you list indeed force you into a particular model, so they always have to be added with care. Even functional extensionality: it essentially says that functions have no effect, but one can give models of the theory where they do; see Boulier et al.’s “The next 700 syntactic models of type theory”.
Third, univalence provides a kind of mother of all axioms: funext, propext, proof irrelevance, and UIP for sets (where it should really live) all follow from it, some “by definition”. One can add classical axioms in hProp as well, like excluded middle, but contrary to Prop/SProp, this will have an effect on the computational content of computational things in Type: the hProp/computational-type interface is richer than what Prop’s singleton elimination allows. In particular, hProp enjoys the unique choice principle, which says that one can extract witnesses from proofs of propositional unique existence, e.g. `|| Σ! x : A, P x || -> A`. For that to work computationally, one needs to keep witnesses of such proofs: if you used excluded middle to build the unique existence proof, the use of unique choice will get stuck. In other words, all proofs are now relevant computationally / intensionally, even if they are irrelevant propositionally. Accordingly, we can no longer have an interesting proof-erasure procedure like the one that exists for Prop/SProp, where such a principle of unique choice cannot be defined.
> First, an interesting feature of the upcoming SProp sort is the ability to add axioms in SProp that will not endanger canonicity of the computational part of Coq (funext for example)
funext breaks canonicity: for instance, the proof that `@id bool = (fun b => if b then true else false)` is blocked / not equal to `eq_refl` (which doesn’t even have the right type); matching on it to always produce 0 also gives a blocked term, if you want the formulation of canonicity that only talks about `nat`.
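The blocked term can be reproduced concretely; a sketch (the pointwise proof and the names `id_eq`, `stuck` are mine):

```coq
From Coq Require Import FunctionalExtensionality.

(* A closed proof of the equality, built from the axiom. The pointwise
   proofs compute fine; only the final step is axiomatic. *)
Definition id_eq : @id bool = (fun b => if b then true else false) :=
  functional_extensionality _ _
    (fun b =>
       match b as b0 return @id bool b0 = (if b0 then true else false) with
       | true => eq_refl
       | false => eq_refl
       end).

(* A closed term of type nat that is stuck: reduction cannot get past
   the axiom, so it never reaches the numeral 0. *)
Definition stuck : nat := match id_eq with eq_refl => 0 end.
```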
Sure, assuming FE in Prop breaks canonicity. But after mattam82’s post I was hoping that SProp somehow fixes this situation. Is there any hope that we get both FE and canonicity in some future system? Or is the price for FE the loss of canonicity?
So does Setoid TT, which can be implemented via a program translation into CIC + SProp. The price to pay is that it validates definitional UIP, which depending on your background can actually be a clear bonus.
For the non-logicians among Coq users, it would be helpful to have something like `Declare Axioms Incompatible name1 name2` and `Declare Axiom CoqCompatible name`. Maybe one could also have classes like `Declare Axiom Controversial name` and `Declare Axiom Temporary name` to explicitly encode the knowledge about well-understood axioms. In case one loads libraries using incompatible axioms, one should get an error (which it should be possible to convert to a warning via command-line options). `Print Assumptions` could have an option to ignore well-known axioms. Users interested in such questions could grep a library for such declarations.
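In the meantime, the closest existing tool is `Print Assumptions` itself, which lists every axiom a given proof transitively depends on. A small illustration (lemma name is mine):

```coq
(* Print Assumptions reports the axioms a proof depends on. *)
From Coq Require Import Classical.

Lemma dn_elim : forall P : Prop, ~ ~ P -> P.
Proof. intros P; apply NNPP. Qed.

Print Assumptions dn_elim.
(* Axioms:
   classic : forall P : Prop, P \/ ~ P *)
```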