How does one access the dependent type unification algorithm from Coq's internals -- especially the one used by apply, and the substitution solution it finds?

Cross-posting because I’ve found that the discussions allowed on this site (which are not usually allowed on Stack Overflow) can be very useful. So posting:


TLDR: I want to be able to compare two terms – one with a hole and the other without the hole – and extract the actual lambda term that completes the hole. This could be in Coq, in OCaml, as a Coq plugin, or in any way really.

For example, as a toy example say I have the theorem:

Theorem add_easy_0'':
forall n:nat,
  0 + n = n.
Proof.

The (lambda term) proof for this is:

fun n : nat => eq_refl : 0 + n = n
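(One can confirm directly with Check that this term has the stated type:)

Check (fun n : nat => eq_refl : 0 + n = n).
(* reports the type: forall n : nat, 0 + n = n *)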

If you had a partial proof script, say:

Theorem add_easy_0'':
forall n:nat,
  0 + n = n.
Proof.
    Show Proof.
    intros.
    Show Proof.

Inspecting the proof, you would get as your partial lambda proof:

(fun n : nat => ?Goal)
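(Side note: the open hole can also be listed explicitly with the Show Existentials command, which prints each evar together with the context it is allowed to use:)

Theorem add_easy_0'':
forall n:nat,
  0 + n = n.
Proof.
    intros.
    Show Existentials.
    (* prints something like: Existential 1 = ?Goal : [n : nat |- 0 + n = n] *)
Abort.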

but in fact you can close the proof, and therefore implicitly complete the term via the dependent-type (ddt) unification algorithm, using apply:

Theorem add_easy_0'':
forall n:nat,
  0 + n = n.
Proof.
    Show Proof.
    intros.
    Show Proof.
    apply (fun n : nat => eq_refl : 0 + n = n).
    Show Proof.
Qed.

This closes the proof but does not give you the solution for ?Goal – though obviously Coq must have solved the CIC/ddt/Coq unification problem implicitly and closed the goal. I want to get the substitution solution from apply.

How does one do this from Coq’s internals? Ideally while remaining in Coq, but OCaml internals, Coq plugin solutions, or in fact any solution would make me happy.
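(For comparison, a low-tech workaround at the Coq level – not the internals access I’m asking about – is to close the proof as above and then Print the stored term; comparing it by eye against the partial term (fun n : nat => ?Goal) shows what ?Goal ended up being instantiated with:)

Theorem add_easy_0'':
forall n:nat,
  0 + n = n.
Proof.
    intros.
    apply (fun n : nat => eq_refl : 0 + n = n).
Qed.
Print add_easy_0''.
(* prints the fully elaborated closed proof term that apply built *)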


Appendix 1: how did I realize apply must use some sort of “Coq unification”

I knew that apply must be doing this because the description of the apply tactic says the following, which implies it uses unification:

This tactic applies to any goal. The argument term is a term well-formed in the local context. The tactic apply tries to match the current goal against the conclusion of the type of term. If it succeeds, then the tactic returns as many subgoals as the number of non-dependent premises of the type of term.

This is very very similar to what I once saw in a lecture for unification in Isabelle:

[lecture slide: backward rule application via unification in Isabelle]

with some notes on what that means:

- You have/know rule [[A1; … ;An]] => A (*)
	- that says: given facts A1; …; An, you can conclude A
	- or in backwards reasoning, if you want to conclude A, then you must give a proof of A1; …;An (or know Ai's are true)
- you want to close the proof of [[B1; …; Bm]] => C (**) (since that's your subgoal)
	- so you already have the assumptions B1; …; Bm lying around for you, but you wish to be able to conclude C
- Say you want to transform subgoal (**) using rule (*). Then this is what’s going on:
	- first you need to see if your subgoal (**) is a "special case" of your rule (*). You commence by checking whether the conclusions (targets) of the rule and the subgoal are "equivalent". If the conclusions match then instead of showing C you can now show A instead. But for you to have (or show) A, you now need to show A1; … ;An using the substitution that made C and A match. The reason you need to show A1;...;An is that if you show them you get A automatically according to rule (*) -- which by the "match" (unification) shows the original goal you were after. The main catch is that you need to do this using the substitution that made A and C match. So:
	- first see if you can “match” A and C. The conclusions from both sides must match. This matching is called unification and returns a substitution sig that makes the terms equal
		- sig = Unify(A,C) s.t. sig(A) = sig(C)
	- then, because we transformed the subgoal (**) using rule (*), we must proceed to prove the premises of the rule (*) whose conclusion we matched against the conclusion of the subgoal (**), from the assumptions of the original subgoal (**) (since those are still true), but under the substitution sig that makes the rules match.
- so the new subgoals, if we match the current subgoal (**) with rule (*), are (see the small Coq example after this list):
	- [[sig(B1); … ; sig(Bm) ]] => sig(A1)
	- ...
	- [[sig(B1); … ; sig(Bm) ]] => sig(An)
- Completing/closing the proof above (i.e. proving it) shows/proves:
	- [[sig(B1); …;sig(Bm) ]] => sig(C)
- Command: apply (rule <(*)>) where (*) stands for the rule name
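(A tiny Coq analogue of this picture, using the standard-library lemma eq_trans : forall (A : Type) (x y z : A), x = y -> y = z -> x = z as the rule (*), with eapply playing the role of backward rule application by unification:)

Goal 2 + 2 = 4.
Proof.
    eapply eq_trans.
    (* unification matched the conclusion  x = z  of eq_trans against the goal
       2 + 2 = 4, i.e. sig = { x := 2 + 2, z := 4 }; the premises under sig
       become the new subgoals  2 + 2 = ?y  and  ?y = 4 *)
    - reflexivity.  (* this also instantiates the remaining evar ?y *)
    - reflexivity.
Qed.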

Appendix 2: why not exact?

Note that initially I thought exact was the Coq tactic I wanted to intercept but I was wrong I believe. My notes on exact:

- exact p. (assuming p has type U). 
	- closes a proof if the goal term T (i.e. a Type) matches the type of the given term p.
		- succeeds iff T and U are convertible (basically, intuitively, if they “unify” – https://coq.inria.fr/refman/language/core/conversion.html#conversion-rules – since we are asking whether T is convertible to U)

Conversion seems to be an equality check (up to computation), not really unification, i.e. it doesn’t try to solve a system of symbolic equations.
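(A small example of the difference: conversion lets exact accept a term whose type merely computes to the goal, with no equation solving involved:)

Goal 2 + 2 = 4.
Proof.
    (* eq_refl 4 has type 4 = 4; the goal 2 + 2 = 4 is convertible to it
       because 2 + 2 reduces to 4, so exact succeeds by conversion alone,
       without solving for any variables *)
    exact (eq_refl 4).
Qed.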


Appendix 3: Recall unification

brief notes:

- unification https://en.wikipedia.org/wiki/Unification_(computer_science)
	- an algorithm that solves a system of equations between symbolic expressions/terms
		- i.e. you want 
			- cons2( cons1( x, y, ...,) ..., cons3(a, b, c), ... ) = cons1(x, nil) 
			- x = y
			- basically a bunch of equations, each between an LHS term and an RHS term, and you want to know if you can make them all equal given the terms/values and variables in them...
				- term1 = term2, term3 = term4 ? with some variables perhaps.
			- the solution is the substitution for the variables that satisfies all the equations (see the small Coq example below)
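(In Coq the closest everyday analogue is an evar being solved by unification, e.g.:)

Goal exists x : nat, x = 3.
Proof.
    eexists.        (* the goal becomes  ?x = 3  with ?x an unsolved evar *)
    reflexivity.    (* unification solves the equation, i.e. sig = { x := 3 } *)
Qed.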

cross:

https://www.reddit.com/r/Coq/comments/vzrq5u/how_does_one_access_the_dependent_type/