Why is Coq consistent? What is the intended semantics?

The closest answer in the FAQ is probably this one: https://github.com/coq/coq/wiki/Presentation#what-do-i-have-to-trust-when-i-see-a-proof-checked-by-coq

I agree that this should also be in the manual.

Also, one specific point I haven’t seen discussed “philosophically” anywhere is the reasonableness of assuming the existence of inaccessible cardinals. There may be some papers somewhere, but likely not readable by (accessible to) regular Coq users.


I’ll provide references later, but there is a proof that a part of Coq is SN.

There is the set theoretical model of pCuIC and the HoTT model.

All cover large parts of Coq, but not all, as far as I know.

I think it’s worth pointing out that while the kernel discussion in this post is in some sense distinct from the topic here, it also overlaps quite a lot.

It feels like this section, from the FAQ question that you just linked to, is in serious need of being rewritten:

I think the argument regarding the features that are used should be removed entirely. Furthermore, something should be added about the increased need to trust the OCaml compiler when using native_compute. I don’t feel confident enough to write it, so I’d encourage anyone who does to do so.

Strong normalization implies consistency.
I believe this argument is due to Thierry, e.g. p15 here.
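For what it’s worth, here is my informal reconstruction of the argument (a sketch, not the cited proof):

```coq
(* Sketch: SN implies consistency.
   - Suppose p were a closed term of type False.
   - By SN, p reduces to some normal form p'; by subject reduction,
     p' still has type False.
   - A closed normal inhabitant of an inductive type must start with
     a constructor, but False has no constructors.
   Hence no closed p : False exists, i.e. the theory is consistent. *)
Print False.  (* False is declared with no constructors *)
```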

However, the last time this came up, it was not clear whether Coq is SN.

There is the set-theoretical model for pCuIC, but I don’t think this is claimed to cover all corners of Coq.

And there is a fairly elaborate semantics of Coq in HoTT; see references here:

Am I overlooking something?


Coq is still not strongly normalizing (this is folklore among Coq developers):

Fixpoint f (n : nat) := let a := f n in 0.
Eval cbn in f 0.
Fail Eval cbv in f 0.
The command has indeed failed with message:
Stack overflow.

However, IIUC this does not endanger the consistency of Coq (cbn discards the unused let-binding, so the recursive call disappears, while cbv evaluates it first and loops), even if it would be nicer to simply disallow this kind of definition.

I think that warrants a bug report. The paper on the set-theoretical model conjectures that pCuIC is SN.
@mattam ?

@Zimmi48 has this been documented? It sounds like something that should be fixed if we want the implementation of Coq to follow pCuIC.

You meant to ping @mattam82 in your previous message. BTW, he shared this paper yesterday, where it is again assumed that pCuIC is strongly normalizing.

I don’t know if this is documented anywhere, and I didn’t find any issue about it. Feel free to create one.


Indeed, for a calculus based on eliminators of inductive types, which covers only a subset of the definitions that are accepted by Coq’s guardedness check. Fixing the guard w.r.t. its unfolding behavior, to ensure any definition can indeed be translated to eliminators, is on my list :slight_smile: One fix could be to ensure that the reductions the guard performs are safe: they never make a recursive call disappear (a kind of linearity condition).


Thanks. For the record, Coq is supposed to implement pCuIC and is therefore expected to be SN?
So, should this be considered a “bug”? Would it help if we opened an issue?

I think the guard issue is already reported (in any case it’s always been on my mind :slight_smile: )

Other than that, indeed it is supposed to implement pCuIC and to be consistent and SN. For the SN proof, the closest we have is Barras’ work in his habilitation, for a model where the guard is represented using sized types instead of a syntactic check, IIRC. No such bad examples would typecheck in that system.

That was what I thought too, but I was unable to find the issue.

About the (categorical) semantics of Coq. Inductive families can be reduced to W-types; see here. These are claimed to be similar to the inductive families of Coq. However, I am not sure I’ve ever seen a precise result along these lines.
Voevodsky has made a similar reduction for Coq’s inductive types, but I am not sure it’s entirely up to date, or complete.
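For concreteness, W-types are usually rendered in Coq like this (a standard textbook definition, not taken from the linked references):

```coq
(* W-types: well-founded trees whose nodes are labelled by A,
   with branching arity B a at a node labelled a. *)
Inductive W (A : Type) (B : A -> Type) : Type :=
| sup : forall a : A, (B a -> W A B) -> W A B.

(* For example, nat can be encoded as
     W bool (fun b => if b then unit else Empty_set),
   with false playing the role of 0 and true of S; but deriving
   the expected induction principle for this encoding already
   needs functional extensionality. *)
```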


But IIUC, the reduction to W-types doesn’t work in standard ITT, because of the lack of functional extensionality; this is explained, for instance in http://www.cs.nott.ac.uk/~psztxa/publ/obseqnow.pdf.


Yes, but it would still be a substantial conceptual simplification for both the set theoretic and the homotopical interpretation of Coq.

@mattam82, if I understand your new Equations paper correctly, it addresses this issue by compiling to eliminators (a similar thing is done in Lean).

Yes, Equations can define functions using the eliminators for accessibility and case analysis, so it relies only on eliminators and their defining equations. However, by default it still compiles functions down to the struct fix primitives. Adding a mode where we instead try to define them with the standard eliminator is a relatively easy TODO.

Speaking of compiling to eliminators, I would expect that the specification

0 + y = y
S x + y = S (x + y)

compiles to

Definition add : nat -> nat -> nat :=
  nat_rect _ (fun y => y) (fun x a y => S (a y)).

Equations, on the other hand, compiles the above specification to

Definition Add : nat -> nat -> nat :=
  fix F x y := match x with
                 | 0 => fun y0 => y0
                 | S x' => fun y0 => S (F x' y0)
                 end y.

which is not definitionally equal to a formulation with the eliminator nat_rect. Is there a reason that Equations produces this unnecessarily complicated code? Why not compile to `add`? Or, to the definitionally equal

Definition ADD : nat -> nat -> nat :=
  fix F x := match x with
               | 0 => fun y => y
               | S x' => fun y => S (F x' y)
               end.

in case the compiler doesn’t distinguish between primitive recursion and structural recursion?
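For what it’s worth, the claimed definitional equality can be tested directly (assuming the definitions above typecheck as written):

```coq
(* If ADD is definitionally equal to the nat_rect formulation,
   the kernel’s conversion check should accept this: *)
Check (eq_refl : ADD = add).
```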

In principle it can be adapted to do that, but that’s still on my TODO list.